
[Discuss] Proposal: support integration sets #132

Closed
mtojek opened this issue Feb 10, 2021 · 35 comments
Labels
Team:Elastic-Agent Label for the Agent team Team:Fleet Label for the Fleet team Team:Integrations Label for the Integrations team


@mtojek
Contributor

mtojek commented Feb 10, 2021

Hi everyone,

The goal of this issue is to discuss the challenges, benefits, and drawbacks of introducing a new package structure (potentially v2?) that can satisfy the requirement of service granularity (reported in #122). I've prepared a sample package to visualize the proposal.

MySQL pseudo-integration: https://github.com/mtojek/integrations/tree/v2-integration-sets/packages/mysql-2

Requirements:

  • fit in the new service granularity model proposed by @sorantis
  • have a single manifest file (for the package and all data streams), so it is easier for Kibana and EPR to load and process the package description
  • keep large contents (pipeline definitions, icons, screenshots, agent's configs) in separate resources
  • support extensible resources, e.g. manifests contain vars at every level, and fields appear at multiple levels in the directory tree

Notes/observations:

  • Integration policies define groups of inputs: logs, metrics, packets (packetbeat), synthetics (uptime). I suppose we'll also need traces.
  • With this approach, we'll drop the original idea of a package as a group of dashboards, pipelines, etc. The new concept changes the meaning of more than just integrations.
  • data_stream names are aligned with the manifest structure, e.g.:
    integration_sets:
    - name: basic
      integration_policies:
        logs:
          inputs:
            - name: error
              ilm_policy: custom-hot-warm-logs
              vars: ~ # ...
    .. which gives logs-mysql-basic.error-shared. I suppose there should be an option for the end user to correct/change it.
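To make the derivation concrete, here is a hypothetical reading of the snippet above (the mapping is my interpretation, not a confirmed format; "shared" stands for the namespace):

```yaml
# Hypothetical: how data stream names could be derived from the manifest.
# Pattern: <group>-<package>-<set>.<input>-<namespace>
integration_sets:
  - name: basic
    integration_policies:
      logs:
        inputs:
          - name: error    # -> logs-mysql-basic.error-<namespace>
          - name: slowlog  # -> logs-mysql-basic.slowlog-<namespace>
      metrics:
        inputs:
          - name: perf     # -> metrics-mysql-basic.perf-<namespace>
```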

Please do not consider this as the future approach or a kick-off for adjustments; it's rather an exercise to understand and estimate the amount of work in multiple places. It would be nice to hear feedback from all parties.

EDIT:

Do not look at the manifest as it's the final/target format. Please rather focus on concepts/requirements.

@mtojek
Contributor Author

mtojek commented Feb 10, 2021

Let me ask a few people to join the discussion.

Kibana: @jen-huang @skh
AWS Integration/Integrations: @kaiyan-sheng @masci @andrewkroh
EPR: @ruflin @ycombinator
Fleet: @ph @nchaulet

It would be great to first determine whether we're able to introduce such changes and when the right time would be.

@ph

ph commented Feb 10, 2021

Let's keep @mostlyjason @andresrc in the loop

@ph ph added Team:Elastic-Agent Label for the Agent team Team:Fleet Label for the Fleet team Team:Integrations Label for the Integrations team labels Feb 10, 2021
@sorantis
sorantis commented Feb 10, 2021

Thank you for starting this discussion @mtojek.

One use case I was thinking about is the usage of vars in the new architecture. I see two cases for common configuration:

  1. vars that are applicable to all policies. I think you have it covered in your example and it should work like a charm for AWS account details for example.
  2. vars that are applicable to one policy only. For example, for NGINX metrics the user might want to use HTTP and SSL configuration to access password protected NGINX server, while for logs it’s not really necessary.

UPDATE
I see that vars can also be specified for each policy as well as for each input, which should cover both cases!

integration_policies:
      logs:
        inputs:
          - name: error
            ilm_policy: custom-hot-warm-logs
            vars: ~ # ...
          - name: slowlog
            ilm_policy: hot-warm
            vars: ~ # ...
        vars: ~ # ...

@kaiyan-sheng
Contributor

Thanks @mtojek and @sorantis! I like the design of having vars at the integration_policies level so we can share these vars across the metrics and logs policies. For AWS, this is where the common credentials will be. One question I have: why do we need the inputs layer/folder under logs and metrics? Can we put the individual inputs directly under the logs or metrics folder? For example:

.
├── README.md
├── fields
│   └── README.md
├── integration_sets
│   ├── basic
│   │   ├── fields
│   │   │   └── README.md
│   │   └── integration_policies
│   │       ├── logs
│   │       │       ├── error
│   │       │       │   ├── agent
│   │       │       │   │   └── stream.yml.hbs
│   │       │       │   └── fields
│   │       │       │       └── README.md
│   │       │       └── slowlog
│   │       └── metrics
│   │               └── perf
│   ├── enterprise
│   ├── galera
│   ├── traffic
│   └── uptime
└── manifest.yml

@mtojek
Contributor Author

mtojek commented Feb 10, 2021

Hey @kaiyan-sheng , thanks for taking a look.

I assume you're referring to this level: https://github.com/mtojek/integrations/tree/v2-integration-sets/packages/mysql-2/integration_sets/basic/integration_policies . The goal of the additional inputs directory is to leave the door open for potential future extensions, if we decide to put something alongside the inputs. Maybe dedicated vars for all "logs" inputs?

@ycombinator
Contributor

ycombinator commented Feb 10, 2021

One of the stated requirements is:

have single manifest file (for package and all data streams), so it would be easier for Kibana and EPR to load and process package description

I would not optimize for machine reading (Kibana and EPR) when thinking about the format/structure of the package manifests. We can always use elastic-package build to stitch individual manifests into a single one to optimize for machine consumption if needed.

When thinking of the package structure I would optimize for human consumption instead, as this will impact package developers more frequently. If, by that measure, a single manifest file is more convenient, then let's go with that. But if multiple files grouped by logical folders (like we have today) are more easily understood and maintained by humans, I would lean towards that.

@ycombinator
Contributor

ycombinator commented Feb 10, 2021

With this approach, we'll drop the original idea of a package - a group of dashboards, pipelines, etc. The new concept changes the meaning of not just integrations.

Where would stack assets (index templates, dashboards, pipelines, etc.) be defined? Don't they need to come from somewhere associated with a package version so Kibana can install them?

Can you flesh out one more integration set in your sample package? I'm curious to know if there's any repetition of contents between two integration sets (e.g. agent policies or fields).

Speaking of fields, I see fields folders in the sample spec but no field definitions under them (just READMEs so far). I also don't see field definitions in the top-level manifest.yml file. Where would field definitions come from? This may be related to the first question above about stack assets.

@mtojek
Contributor Author

mtojek commented Feb 11, 2021

Thank you, Shaunak, for looking into this.

I would not optimize for machine reading (Kibana and EPR) when thinking about the format/structure of the package manifests. We can always use elastic-package build to stitch individual manifests into a single one to optimize for machine consumption if needed.

Agree, but I'm not sure the current structure is sufficient to model the "integration sets" concept. If it is, then all the names will be wrong (policy templates -> integration policies?). In that case the build procedure will mean mapping entities from one domain to another. I'm not sure this is the expected way to go; it will definitely be hard to debug/RCA anything if integrations stay as-is and the Kibana side changes (new domain).

Side note: if we keep the human model aligned with the machine model (or even have a single one), the build procedure will be less error-prone.

But if multiple files, grouped by logical folders (like we have today) is more easily understood and maintainable by humans, I would lean towards that.

Sure, we can split the single manifest into multiple entries. It just depends on the desired user experience: is it easier to design a new integration while looking at all integration policies in one place, or is it fine to navigate between directories?

Where would stack assets (index templates, dashboards, pipelines, etc.) be defined? Don't they need to come from somewhere associated with a package version so Kibana can install them?

Frankly speaking, I skipped them intentionally in the initial draft to first agree on the core model (integration sets, integration policies, etc.). Things like vars, fields, and Kibana saved objects can be added later whenever we want (even at every level).
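As a sketch of what "even at every level" could mean (purely illustrative; the var names are made up):

```yaml
# Illustrative only: vars attached at package, set, policy, and input level.
vars:                        # package level, shared by everything
  - name: mysql_dsn
integration_sets:
  - name: basic
    vars: ~                  # set level, shared by all policies in "basic"
    integration_policies:
      logs:
        vars: ~              # policy level, shared by all "logs" inputs
        inputs:
          - name: error
            vars: ~          # input level, specific to this input
```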

Speaking of fields, I see fields folders in the sample spec but no field definitions under them (just READMEs so far). I also don't see field definitions in the top-level manifest.yml file. Where would field definitions come from? This may be related to the first question above about stack assets.

As I stated above, I wanted to focus on the core structure first. Regarding the repetitions, I don't think we'll be able to get rid of them. Imagine a case in which some fields are valid for 2 of 3 metrics integration policies. The presented model doesn't cover this.

@ycombinator
Contributor

Agree, but I'm not sure if the current structure is sufficient to model the "integration-sets" concept.

I don't think I was suggesting reusing the current structure (as in, what live packages use today). All my comments here are referring to the new proposed structure.

To be clear: I'm suggesting that the new proposed structure that you are demonstrating in https://github.com/mtojek/integrations/tree/v2-integration-sets/packages/mysql-2 should not optimize for machine consumption but rather human consumption to make package authors' lives easier. Via elastic-package build we could transform the human-friendly structure into anything that's more machine-friendly so we get the best of both worlds, each optimized for consumers at different points of the package lifecycle - creation/maintenance (done by humans) vs. serving/parsing (done by machines). This is similar to the idea of a high-level programming language coupled with a compiler.

Side note - if we keep the human model aligned with machine model (or even have a single one), the build procedure will be less error prone.

I'm not sure about this. Certainly, if the two are the same, there's no build process as such, so there are no build errors to worry about. But I'm not sure it's worth making the developer experience poorer either. We can have tests to make sure the build procedure produces the expected output for a given input.

Now, it may turn out that humans actually would prefer a single large manifest file. In that case, I'm all for it! Maybe this is something we need more feedback from current package authors on? I do think that to provide such feedback it would be helpful to have the sample package more fleshed out. This would give a more accurate picture of what a package developer would need to create and maintain.

Frankly speaking, I skipped them intentionally in the initial draft to agree first on the core model (integration sets, integration policies, etc.). Things like vars, fields, kibana saved objects can be added later on whenever we want (even on every level).

As I stated above I wanted to focus on the core structure first.

It is to inform decisions about the core structure that I'd like to see it more fleshed out 🙂.

Regarding the repetitions, I don't think we'll be able to get rid of them. Imagine the case in which some fields are valid for 2/3 metrics integration policies. The presented model doesn't cover this.

I think repetitions could be avoided by referencing, just as an example. But before we jump to a solution, it would be good to know, at least for me, where repetitions could show up in the structure.

I guess I'm uncomfortable having a strong opinion on the proposed structure without seeing more use of it — both within a single integration (more fleshing out of the sample package) but also across different types of integrations (maybe pick a few diverse examples of integrations from @sorantis's doc and create sample packages for those as well?) You don't need to flesh out every field or every asset in general, but seeing 2-3 instances of every type of asset would be enough to see patterns, I think.

I'm on PTO at the moment but, if you prefer, I can do all the work I'm asking for here once I'm back via a PR to your branch.

@ph

ph commented Feb 11, 2021

@jen-huang This is the other issue related to the doc I've shared with you.

@sorantis

Can you flesh out one more integration set in your sample package? I'm curious to know if there's any repetition of contents between two integration sets (e.g. agent policies or fields).

@kaiyan-sheng I think this would be a good exercise to see what the proposed changes look like in the AWS case.

@ruflin
Member

ruflin commented Feb 15, 2021

I'm not sure I would throw the current structure overboard. There are 2 parts in the packages:

  1. How data is collected, processed and mapped: data_streams
  2. How the configuration is built for the collection

We will always need 1., and I would assume it doesn't really change if we have integration sets. What changes is 2. But for the grouping of 2. we already have policy_templates. Let's ignore that the naming is off; it seems it already has most of the grouping we need, with a few extensions?

One thing to keep in mind is that multiple integration sets can reuse the same data_streams. This is possible in the current structure, but I'm not sure how it would work with the proposal here.
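As a sketch of that reuse concern (the data_streams key here is illustrative; in the current spec each data stream lives in a data_stream/<name> directory):

```yaml
# Sketch: two groupings referencing the same data streams instead of duplicating them.
policy_templates:
  - name: basic
    data_streams: [error, perf]          # both resolve to data_stream/<name>
  - name: enterprise
    data_streams: [error, perf, audit]   # error and perf are reused, not copied
```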

@mtojek
Contributor Author

mtojek commented Feb 15, 2021

I reckon the naming part is crucial here, so we don't end up with something that isn't descriptive at all. Even now it's already a bit tangled (starting with the package's manifest.yml, down to the data stream's manifest.yml); the flow is: policy_template -> input -> streams -> input. I know the package description is processed by machines, but don't forget that developers will have to write it (not generate it and click through a designer/wizard).

The missing part is the grouping of policy templates, including screenshots and icons for particular groups. We can add these definitions on the side, but I'm afraid it might be hard to justify the design of the structure without referring to backwards compatibility or historical reasons.

The reason I created this thread is to evaluate whether we can leverage package spec versioning (v2?) and invent a format more suitable for the current requirements. Maybe it isn't doable at all due to the complexity of various parts, and we HAVE to stick to spec extensions/patches.

BTW I'm happy to Zoom and talk through all the cases. Later on it would be great to write down a short decision log entry justifying the decision.

@sorantis

I know that package description is processed by machines, but don't forget that developers will have to write it (not generate it and click out with a designer/wizard).

+1. Naming convention aside, if we are to define a v2 of the package spec, we should do it with the target user in mind and optimize for package developers.

@ruflin
Member

ruflin commented Feb 17, 2021

Here is an alternative approach for discussion. The assumptions I make on my end:

  • A package is and stays a collection of assets for the Elastic Stack. A package can be just 2 dashboards
  • Data streams can be installed / the data stream naming scheme set up independently of any integration
  • Multiple inputs can send data to the same data stream. For example, apache logs can be collected through syslog or file; the same data stream is used.

All of the below is meant to start a discussion and is not intended to be 100% accurate.

.... #existing package details


# Credentials are shared across all of the package
credentials:
  - name: access_key_id
    type: text
    title: Access Key ID
    multi: false
    required: false
    show_user: false
    default: ""

policy_templates/integration_sets:
  - name: ec2
    title: AWS logs and metrics
    description: Collect logs and metrics from AWS instances
    
    # This is new
    screenshots: ...
    icon: ...
  readme: ... # Will each integration set have a readme? Where are the data stream fields shown, as they are potentially
              # shared across integration sets?
    
    # Which data stream configs to show in UI.
    # An alternative is what Marcin had in his proposal: directly use these names as keys and put the inputs under them.
    # The downside of that is that the same input cannot be reused and has to be specified each time. Need to check on actual
    # packages what works better.
    uses: ec2_logs, ec2_metrics
    
    # Default grouping in the UI should be by type (metrics, logs) so this should not be needed
    grouping: type
    
    # Default configs for the inputs to be used
    inputs:
      - type: s3
        title: Collect S3 metrics
        description: ...
        
        # maybe it requires a custom template for the data stream
        template: ...
        vars:
          - name: visibility_timeout
            type: text
            title: Visibility Timeout
            multi: false
            required: false
            show_user: false
            description: ...


      - type: aws/metrics
        title: Collect metrics from AWS instances
        description: Collecting AWS
        vars:
          - name: access_key_id
            type: text
            title: Access Key ID
            multi: false
            required: false
            show_user: false
            default: ...
            
  - name: DynamoDB
    title: DynamoDB
    description: ...
    
    # This is new
    screenshots: ...
    icon: ...
    readme: ... # should have a good default, same as name / id?
    
    
    uses: [dynamodb]
    grouping: type
    
    inputs:

      - type: aws/metrics
        title: Collect metrics from AWS instances
        description: Collecting AWS 
        vars:

@ycombinator
Contributor

ycombinator commented Feb 17, 2021

@ruflin What I like about your proposal is that it promotes reuse of a package's components, which means less repetition for a package author. I'm in favor of this from a maintenance perspective.

A couple of suggestions to tweak it, based on ideas from @mtojek's original proposal:

  • Instead of credentials specifically, I think we could generalize this a bit more as package-level vars / configuration.
  • Just as we have folders today for each data stream, data_stream/<name>, we could have folders for each integration set, i.e. integration_set/<name>. This gives us a place to store larger assets for integration sets, e.g. README files. I would still keep the policy_templates / integration_sets section in the main manifest file with the name field of the policy template / integration set corresponding to the folder name.

Additionally, a couple of minor suggestions of my own:

  • Let's rename uses to data_streams so it's a bit more explicit 🙂.
  • For naming, we can support both policy_templates (existing name) and integration_sets (new name), while deprecating policy_templates. Of course, this means the spec has to support it, but also Kibana has to first look for integration_sets and, if absent, fall back to policy_templates.
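Put together, a sketch of how those tweaks could look in a manifest (the names follow the suggestions above and are not an agreed format):

```yaml
vars:                      # generalized package-level configuration instead of "credentials"
  - name: access_key_id
    type: text
    title: Access Key ID
integration_sets:          # new name; Kibana would fall back to policy_templates if absent
  - name: ec2
    title: AWS logs and metrics
    data_streams: [ec2_logs, ec2_metrics]   # renamed from "uses" to be explicit
    # larger assets (README, screenshots) would live under integration_set/ec2/
```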

@ycombinator
Contributor

BTW, whichever proposal we end up with after today's meeting, I still think it needs to be tested with a few fleshed-out packages just to make sure it's going to work for all of the use cases @sorantis mentioned in his original doc.

@sorantis

sorantis commented Feb 17, 2021

Based on our conversation, here's an alternative structure that combines Marcin's proposal and the existing schema, and makes it possible to create groups that each input can refer to. It can be called a group or tab or whatever name is more applicable to its purpose.

groups: 
- name: logs
  title: "My beautiful logs"
- name: metrics
  title: "Foobar"
- name: synthetics
  ....
- name: packets

policy_templates:
  - name: basic
    screenshots:
    icons:
    readme:
    searchable: true
    description:
    title:
    inputs:
       - name: error
         type: 
         group: logs
         vars: ~ # ...
       - name: slowlog
         group: logs
         vars: ~ # ...
       - name: perf
         group: metrics
         vars: ~ # ...
    vars: ~ # ...
  - name: enterprise
    inputs:
       - name: error
         group: logs
         vars: ~ # ...
       - name: slowlog
         group: logs
         vars: ~ # ...
       - name: perf
         group: metrics
         vars: ~ # ...
  - name: galera
  - name: traffic
  - name: uptime
vars:
  - name: mysql_dsn
    title: MySQL DSN
    type: text
    default: "tcp(127.0.0.1:3306)/"
  - name: mysql_username
    title: Username
    type: text
    default: "root"
  - name: mysql_password
    title: Password
    type: password
    default: "test"
ilm_policies:
  - name: custom-hot-warm-logs
    # more properties
  - name: custom-hot-warm-metrics
    # more properties

@kaiyan-sheng
Contributor

A question about icons: with this structure, are we defining icons in each policy template? And where are we going to put icons for individual inputs, such as EC2? Looking at the AWS package as an example, shouldn't the AWS icon be at a higher level instead of under the basic policy_templates? Also, when we want to add an icon for EC2, is this icon added under both ec2_logs and ec2_metrics?

name: aws
title: AWS
version: 0.0.1
license: basic
description: AWS Integration
type: integration
release: beta
icons:
  - src: /img/logo_aws.svg
    title: logo aws
    size: 32x32
    type: image/svg+xml
groups:
  - name: logs
    title: "Collect logs from AWS services"
  - name: metrics
    title: "Collect metrics from AWS services"
vars:
  - name: access_key_id
    type: text
    title: Access Key ID
    multi: false
    required: true
    show_user: true
  - name: secret_access_key
    type: text
    title: Secret Access Key
    multi: false
    required: true
    show_user: true
policy_templates:
  - name: basic
    description: Basic
    inputs:
      - name: ec2_logs
        type: s3
        group: logs
        title: Collect logs from AWS EC2
        description: Collecting AWS EC2 logs
        icons:
          - src: /img/logo_aws_ec2.svg
            title: logo aws ec2
            size: 32x32
            type: image/svg+xml
        vars:
          - name: visibility_timeout
            type: text
            title: Visibility Timeout
            multi: false
            required: false
            show_user: false
            description: The duration that the received messages are hidden from subsequent retrieve requests after being retrieved by a ReceiveMessage request.  The maximum is 12 hours.
          - name: api_timeout
            type: text
            title: API Timeout
            multi: false
            required: false
            show_user: false
            description: The maximum duration of AWS API can take. The maximum is half of the visibility timeout value.
          - name: queue_url
            type: text
            title: Queue URL
            multi: false
            required: true
            show_user: true
            description: URL of the AWS SQS queue that messages will be received from.
      - name: ec2_metrics
        type: aws/metrics
        group: metrics
        title: Collect metrics from CloudWatch for Amazon EC2
        description: Collecting Amazon EC2 metrics from CloudWatch
        icons:
          - src: /img/logo_aws_ec2.svg
            title: logo aws ec2
            size: 32x32
            type: image/svg+xml
        vars:
          - name: period
            type: text
            title: Period
            multi: false
            required: true
            show_user: true
            default: 5m
          - name: regions
            type: text
            title: Regions
            multi: true
            required: false
            show_user: true
      - name: dynamodb
        type: aws/metrics
        group: metrics
        title: Collect metrics from CloudWatch for Amazon DynamoDB
        description: Collecting Amazon DynamoDB metrics from CloudWatch
        icons:
          - src: /img/logo_aws_dynamodb.svg
            title: logo aws dynamodb
            size: 32x32
            type: image/svg+xml
        vars:
          - name: period
            type: text
            title: Period
            multi: false
            required: true
            show_user: true
            default: 5m
          - name: regions
            type: text
            title: Regions
            multi: true
            required: false
            show_user: true

@sorantis

@kaiyan-sheng I think in your example - name: dynamodb should be defined right under the policy_templates section, because it's a new policy set.

@sorantis

A question came up about where the user will land after clicking e.g. the EC2 tile. If we keep the README at the package level, then the behavior won't change: the user will see the AWS details page. Alternatively, we can define the README at the policy set level, which would allow us to create "tile"-specific detail pages.

Thoughts?

@mtojek
Contributor Author

mtojek commented Feb 18, 2021

I suggest starting with baby steps and not changing the behavior for READMEs. We can adjust it in later iterations; let's focus first on delivering something.

I'm working on transforming the concept into official spec (PR).

@ruflin
Member

ruflin commented Feb 18, 2021

@kaiyan-sheng Icons should be under policy templates, not inputs, and the same goes for dynamodb from @sorantis. Let's wait and see what @mtojek comes up with :-D

@mtojek
Contributor Author

mtojek commented Feb 18, 2021

@ruflin There is actually a good question about policy templates. I think it's related to what @kaiyan-sheng presented here.

  1. Does it mean we should have multiple policy templates? Currently we have always one.
  2. Can the policy template represent the integration set like "basic", "advanced", "ec2", "dynamodb"?
  3. If the answers to both questions above are yes, do we really need an additional groups field? A policy template could be a single group; we just need to add icons and vars there.

@kaiyan-sheng
Contributor

@sorantis I think we should have a README at both the package level and the policy template level. For AWS, the README at the package level will be an overview of the whole package, the common configuration for credentials, etc. The policy-template-level README will contain information (exported fields, dashboards, ...) specific to the input (data stream?), such as EC2.

@mtojek
Contributor Author

mtojek commented Feb 18, 2021

For AWS, the README at the package level will be an overview of the whole package, the common configuration for credentials, etc. The policy-template-level README will contain information (exported fields, dashboards, ...) specific to the input (data stream?), such as EC2.

In this case we'll also have to adjust the logic responsible for generating README files (fields, sample events, doc templates). Kibana will have to merge both READMEs into a single file. I don't mind splitting, but I'm not quite sure about the gain in splitting them. Another thought: where in the directory tree should we keep them? There won't be a directory dedicated to policy sets or policy templates at all.

@kaiyan-sheng
Contributor

@mtojek The gain from splitting them is keeping the README file shorter and more readable. The current README for the AWS integration includes all data streams, exported fields, and credentials, but only one dashboard screenshot.
Is there a way we can share the same README file but split it into different packages on the Kibana side? Before trying to solve this problem, maybe we should wait until we figure out the 3 questions you posted above first :)

@ycombinator
Contributor

There won't be a directory dedicated to policy sets or policy templates at all.

I am in favor of creating such directories. It gives us a place to store larger assets like README files instead of defining them inline in the YAML files.

@sorantis

sorantis commented Feb 19, 2021

Sharing my perspective on @mtojek's questions:

  1. Does it mean we should have multiple policy templates? Currently we have always one.

No, we shouldn't. We could, but I think one policy_templates section should be sufficient. I don't see how we would benefit from having multiple.

  2. Can the policy template represent the integration set like "basic", "advanced", "ec2", "dynamodb"?

Yes, so each policy set is represented like this:

policy_templates:
  - name: ec2
    description: AWS EC2
    inputs:
    ...
  - name: dynamodb
    description: AWS DynamoDB
    inputs:
    ...
  - name: ebs
    description: AWS EBS
    inputs:
    ...
  3. If the answers to both questions above are yes, do we really need an additional groups field? A policy template could be a single group; we just need to add icons and vars there.

The purpose of the group or tab field is to bring relevant inputs of the same type together on the same tab.

(screenshot: Screen Shot 2021-02-02 at 13 55 21)

@mtojek
Contributor Author

mtojek commented Feb 19, 2021

No we shouldn't. We could, but I think one policy_templates section should be sufficient. I don't see how we would benefit from having multiple.

I might not have expressed myself clearly, but you answered the question :) A single policy_templates section with MULTIPLE policy templates.

The purpose of the group or tab field is to bring relevant inputs of the same type together on the same tab.

I wonder if we can deduce it from the input type. In that case Kibana would need to know how to group inputs, but maybe that's overkill.

@ruflin
Member

ruflin commented Feb 19, 2021

The purpose of the group or tab field is to bring relevant inputs of the same type together on the same tab.

It is likely that the input type will not always work for this. Let's take redis logs as an example. Some logs come from a file; others need the redis connection (slowlog). So these are 2 different inputs, but I would assume they are under the same tab. At the same time, it could be argued this is a bit of an odd example... Maybe we have better ones.
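A sketch of the redis case (keys illustrative), where grouping by input type alone would split inputs that belong on the same tab:

```yaml
inputs:
  - name: log          # plain log files
    type: logfile
    group: logs        # explicit group needed; grouping by type would separate these
  - name: slowlog      # needs a redis connection
    type: redis
    group: logs        # same tab despite a different input type
```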

Directories for policy_templates

This might be useful for grouping screenshots together or placing the README. But at the same time we could use smart defaults for the names, like the README for EC2 being called EC2.md. I think for the icons it is also likely that many policy_templates will have the same icon, just different text?

@mtojek mtojek mentioned this issue Feb 19, 2021
@mtojek
Contributor Author

mtojek commented Feb 19, 2021

I've started writing out the package contents and the relevant spec in #137 . I believe it will ease reviewing.

Feel free to jump in and comment on particular lines.

@mtojek
Contributor Author

mtojek commented Mar 1, 2021

Let me share an update on Input Groups:

We have an agreement on the structure proposal in #137 . Here is the list of next action items:

EDIT:

I moved the list into a dedicated meta-issue: #144

@mtojek
Contributor Author

mtojek commented Mar 2, 2021

Resolving in favor of #144

@mtojek mtojek closed this as completed Mar 2, 2021
rw-access pushed a commit to rw-access/package-spec that referenced this issue Mar 23, 2021
* HOWTO: Writing pipeline tests for a package

* Fill test case definitions

* Fix: deploy ES only

* Fix

* Fix

* Update docs/howto/pipeline_testing.md

Co-authored-by: Shaunak Kashyap <ycombinator@gmail.com>


* Address PR comments for README

* More comments

* More comments

* File naming rules

* Why we need fields

* Fixes

Co-authored-by: Shaunak Kashyap <ycombinator@gmail.com>