New component: Template Receiver #26312

Closed
2 tasks
djaglowski opened this issue Aug 30, 2023 · 10 comments
Labels
Sponsor Needed New component seeking sponsor

Comments

@djaglowski
Member

djaglowski commented Aug 30, 2023

Motivation

Developing a configuration for the Collector often requires detailed knowledge of one or more Collector components, a sophisticated understanding of how to interface with an external technology, or simply a non-trivial amount of effort working through the necessary data manipulations.

We should provide an easy way to capture and share useful portions of configuration.

Proposal

A new template receiver would present users with simplified configurations. In the simplest case, a template file contains a partially preconfigured receiver. The template receiver allows the user to reference such a template file and provide only the parameters necessary to complete the configuration.

In addition to a templated configuration, the template file may contain machine-readable metadata about the template, such as a title, description, version, a schema describing the parameters accepted by the template, and a schema describing the telemetry emitted by the template. A schema which defines parameters may automatically apply default values and/or enforce type requirements (e.g. must be an int between 1-65535, or must match an enumerated list of strings).

Templating can be achieved using Go's text/template package, which allows for simple string insertions as well as more advanced control flow (e.g. render a section for each item in a slice using range, or only render a section of config when a value indicates the need).
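
For reference, the rendering step itself comes almost for free from the standard library. A minimal sketch (the template string and parameter name are illustrative only, not part of any proposed component):

package main

import (
	"os"
	"text/template"
)

func main() {
	// Hypothetical templated receiver config, as it might appear in a template file.
	const tmplText = `
receivers:
  otlp:
    protocols:
      http:
        endpoint: localhost:{{ .port }}
`
	// Parameters supplied by the user in the collector config.
	params := map[string]any{"port": 4318}

	// Parse and render the template. "missingkey=error" makes an unknown
	// parameter an error instead of rendering "<no value>".
	tmpl := template.Must(template.New("otlp").Option("missingkey=error").Parse(tmplText))
	if err := tmpl.Execute(os.Stdout, params); err != nil {
		panic(err)
	}
}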

In order to ensure that a template file is 1) fully self-defined and 2) valid YAML, the structure of a template file should be defined such that the templated configuration is a raw string or byte sequence within a valid YAML schema.

Simple Example

config.yaml

receivers:
  template:
    file: my_otlp_template.yaml
    parameters:
      port: 4318

my_otlp_template.yaml

title: Simple OTLP
description: ...
version: 0.0.1
parameters:
  - name: port
    type: port # int, 1-65535
    default: 4318
template: |
  receivers:
    otlp:
      protocols:
        http:
          endpoint: localhost:{{ .port }}
          cors:
            allowed_origins:
              - http://test.com
              - https://*.example.com
            allowed_headers:
              - Example-Header
            max_age: 7200
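
For clarity, rendering the template above with port: 4318 yields the equivalent of:

receivers:
  otlp:
    protocols:
      http:
        endpoint: localhost:4318
        cors:
          allowed_origins:
            - http://test.com
            - https://*.example.com
          allowed_headers:
            - Example-Header
          max_age: 7200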

Multiple Receivers

In many cases, it would be useful to configure multiple related receivers using a single template. For example, a database may emit logs to several distinct files, each with a different format. In this case, a template may serve as a single solution for the database by encapsulating multiple instances of the filelog receiver, each with appropriate parsing behaviors.

Example

my_mysql_template.yaml

title: MySQL
description: Log parser for MySQL
version: 0.0.1
parameters:
  - name: general_log
    type: string
    default: /var/log/mysql/general.log
  - name: error_log
    type: string
    default: /var/log/mysqld.log
  - name: start_at
    type: string
    supported:
      - beginning
      - end
    default: end
template: |
  receivers:
    filelog/general:
      include: {{ .general_log }}
      start_at: {{ .start_at }}
      multiline:
        line_start_pattern: '\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}.\d+Z|/\w+/\w+/mysqld,'
      operators:
        - type: regex_parser
          regex: '(?P<timestamp>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}.\d+Z)\s+(?P<tid>\d+)\s+(?P<command>\w+)(\s+(?P<message>(?s).+\S))?'
          timestamp:
            parse_from: attributes.timestamp
            layout: '%Y-%m-%dT%H:%M:%S.%sZ'
    filelog/error:
      include: {{ .error_log }}
      start_at: {{ .start_at }}
      multiline:
        line_start_pattern: '\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}.\d+Z'
      operators:
        - type: regex_parser
          regex: '(?P<timestamp>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}.\d+Z)\s+(?P<tid>\d+)\s+\[(?P<mysql_severity>[^\]]+)]\s+(?P<message>[\d\D\s]+)'
          timestamp:
            parse_from: attributes.timestamp
            layout: '%Y-%m-%dT%H:%M:%S.%sZ'

Processors

In some cases it would be helpful to apply predefined processor configurations to data ingested by receivers.

The addition of processors complicates the templated configuration somewhat, in the sense that we may now also need to include partial pipelines in the template as well. This allows a defined order of processors, as well as explicitly declared relationships between receivers and processors. It should not be necessary (or even allowed) to declare exporters, as it should be understood that the template receiver will "export" according to its position in the top-level service graph.

Example

my_mysql_template.yaml

title: With Partial Pipelines
parameters: ...
template: |
  receivers:
    filelog/general: ...
    filelog/error: ...
  processors:
    filter/min_error_level: ... # only apply to general
    transform: ... # applies to both
  pipelines:
    logs/general:
      receivers: [filelog/general]
      processors: [filter/min_error_level, transform]
    logs/error:
      receivers: [filelog/error]
      processors: [transform]

Multiple Data Types

It may be helpful for a single templated configuration to encapsulate solutions for multiple data types, for example, reading logs while also scraping metrics.

title: With Multiple Data Types
parameters: ...
template: |
  receivers:
    filelog: ...
    sqlquery: ...
  processors:
    batch: ...
    filter: ...
  pipelines:
    logs:
      receivers: [filelog]
      processors: [batch]
    metrics:
      receivers: [sqlquery]
      processors: [filter, batch]

Implementation Details

Templated Pipelines

Partial pipelines should be defined within the templated configuration because in some cases a template may generate entire receiver or processor configs (e.g. given a list of nodes, generate a receiver config corresponding to each node). In such cases, it would be necessary to also generate the corresponding partial pipelines.
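
As a rough illustration (the template title and receiver name are hypothetical), such a template would need to emit a partial pipeline alongside each generated receiver:

title: Per-Node Logs
parameters: ...
template: |
  receivers:
    {{ range $i, $node := .nodes }}
    some_node_receiver/{{ $i }}:
      hostname: {{ $node }}
    {{ end }}
  pipelines:
    {{ range $i, $node := .nodes }}
    logs/{{ $i }}:
      receivers: [ some_node_receiver/{{ $i }} ]
    {{ end }}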

Data Type Support / Validation

The template receiver will have to declare that it supports all data types. However, a given template may support any subset of types. Therefore, it is not possible to validate correct usage of the template receiver until the factory attempts to build the receiver given a specific config.

When a factory function is called, e.g. CreateMetricsReceiver, the factory can do the following (a rough sketch in code follows the list):

  1. Read in the template file.
  2. Render the template, using the parameters specified in the config.
  3. Unmarshal the rendered template into a struct which contains the component configs and the partial pipeline configs.
  4. Inspect the pipeline IDs. If there are no metrics pipelines, then fail.
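
A minimal sketch of that flow, using hypothetical types (none of these exist in the collector today) and gopkg.in/yaml.v3 for unmarshaling:

package templatereceiver

import (
	"bytes"
	"fmt"
	"os"
	"strings"
	"text/template"

	"gopkg.in/yaml.v3"
)

// Hypothetical structures; field and type names are illustrative only.
type templateFile struct {
	Title    string `yaml:"title"`
	Template string `yaml:"template"`
}

type partialPipeline struct {
	Receivers  []string `yaml:"receivers"`
	Processors []string `yaml:"processors"`
}

type renderedConfig struct {
	Receivers  map[string]any             `yaml:"receivers"`
	Processors map[string]any             `yaml:"processors"`
	Pipelines  map[string]partialPipeline `yaml:"pipelines"`
}

// renderForSignal sketches the flow above for a single signal (e.g. "metrics").
func renderForSignal(file string, params map[string]any, signal string) (*renderedConfig, error) {
	// 1. Read in the template file.
	raw, err := os.ReadFile(file)
	if err != nil {
		return nil, err
	}
	var tf templateFile
	if err := yaml.Unmarshal(raw, &tf); err != nil {
		return nil, err
	}

	// 2. Render the template using the parameters specified in the config.
	tmpl, err := template.New(tf.Title).Parse(tf.Template)
	if err != nil {
		return nil, err
	}
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, params); err != nil {
		return nil, err
	}

	// 3. Unmarshal the rendered template into component and partial pipeline configs.
	var rendered renderedConfig
	if err := yaml.Unmarshal(buf.Bytes(), &rendered); err != nil {
		return nil, err
	}

	// 4. Inspect the pipeline IDs. If none match the requested signal, fail.
	for id := range rendered.Pipelines {
		if id == signal || strings.HasPrefix(id, signal+"/") {
			return &rendered, nil
		}
	}
	return nil, fmt.Errorf("template %q defines no %s pipelines", tf.Title, signal)
}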

Internal Collector Instance

Each instance of the template receiver should construct and manage its own internal collector instance. In order to do this, it will render the templated config and complete the partial pipelines by attaching an "exporter" which will simply pass data through to the main service graph.
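
For the logs signal, that pass-through "exporter" could be little more than a consumer that forwards data to the next consumer supplied by the main service graph. A minimal sketch, assuming the collector's consumer interfaces (the wiring into the internal collector instance is omitted):

package templatereceiver

import (
	"context"

	"go.opentelemetry.io/collector/consumer"
	"go.opentelemetry.io/collector/pdata/plog"
)

// passthroughLogs completes a partial pipeline by handing data to the
// consumer that the main service graph provided to the template receiver.
type passthroughLogs struct {
	next consumer.Logs
}

func (p passthroughLogs) Capabilities() consumer.Capabilities {
	return consumer.Capabilities{MutatesData: false}
}

func (p passthroughLogs) ConsumeLogs(ctx context.Context, ld plog.Logs) error {
	return p.next.ConsumeLogs(ctx, ld)
}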

Open Questions / Future Work

Internal Service Graph

Currently, I believe that such a solution necessarily requires each instance of the template receiver to construct and manage its own instance of otelcol.Collector, but it would be better to reduce the scope of responsibility that the component must manage. For example, its own-telemetry configuration should not be a concern of the template receiver, as it should inherit these settings from the collector as a whole. However, it's not clear to me that this is currently possible. Possibly, this is a use case that favors open-telemetry/opentelemetry-collector#8111.

Template Processor / Template Exporter

I believe in the future it may also be beneficial to create template processor and template exporter components.

A templated processor would allow for several predefined processing operations to be packaged together. For example, migration of data from one version of semantic conventions to another could be written purely in configuration and easily shared.

A template exporter would allow us to prepend common processing steps onto one or more exporters.

Telemetry data types supported

All

Is this a vendor-specific component?

  • This is a vendor-specific component
  • If this is a vendor-specific component, I am proposing to contribute and support it as a representative of the vendor.

Code Owner(s)

No response

Sponsor (optional)

No response

Additional context

This proposal is based on observIQ's pluginreceiver.

@djaglowski added the Sponsor Needed (New component seeking sponsor) and needs triage (New item requiring triage) labels Aug 30, 2023
@atoulme removed the needs triage (New item requiring triage) label Aug 30, 2023
@codeboten
Contributor

As per the discussion in the SIG call on Aug 30, could this functionality be provided either by the existing yaml provider (https://github.com/open-telemetry/opentelemetry-collector/tree/main/confmap/provider/yamlprovider) or by a new template provider?

@djaglowski
Member Author

As per the discussion in the SIG call on Aug 30, could this functionality be provided either by the existing yaml provider (https://github.com/open-telemetry/opentelemetry-collector/tree/main/confmap/provider/yamlprovider) or by a new template provider?

I spent a couple of hours digging into the confmap package. While there's a lot I don't yet understand, I think I see a possible path. Perhaps someone could validate this alternate approach.


For the sake of simplicity, let's suppose that we would have a template file that contains nothing but a "template type" and a single templated receiver configuration. Roughly:

type: my_filelog_template
template: |
  filelog:
    include: {{ .my_log_file }}

A new template Provider could read this template file, but it could not render the template because it does not have access to the necessary parameters. It must return a map[string]any. Let's say we return:

map[string]any{
  "templates": map[string]any{
    "my_filelog_template": "filelog:\n  include: {{ .my_log_file }}",
  },
}

From there, this is automatically converted into a Conf, and then automatically merged onto an aggregate Conf.
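
A minimal sketch of such a provider, assuming the confmap.Provider interface as it exists today (the template: scheme, file layout, and type names here are hypothetical):

package templateprovider

import (
	"context"
	"fmt"
	"os"
	"strings"

	"go.opentelemetry.io/collector/confmap"
	"gopkg.in/yaml.v3"
)

type templateFile struct {
	Type     string `yaml:"type"`
	Template string `yaml:"template"`
}

type provider struct{}

func (p provider) Retrieve(_ context.Context, uri string, _ confmap.WatcherFunc) (*confmap.Retrieved, error) {
	// e.g. uri = "template:/path/to/my_filelog_template.yaml"
	raw, err := os.ReadFile(strings.TrimPrefix(uri, "template:"))
	if err != nil {
		return nil, err
	}
	var tf templateFile
	if err := yaml.Unmarshal(raw, &tf); err != nil {
		return nil, fmt.Errorf("invalid template file: %w", err)
	}
	// Return the unrendered template under a "templates" key; a later
	// converter has access to the parameters and performs the rendering.
	return confmap.NewRetrieved(map[string]any{
		"templates": map[string]any{
			tf.Type: tf.Template,
		},
	})
}

func (p provider) Scheme() string { return "template" }

func (p provider) Shutdown(context.Context) error { return nil }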

Then, the aggregate Conf is run through a series of Converters. I think this is the first opportunity where we could have both the template and the parameters needed to render it, so we'll need a new template Converter as well.

This converter basically would crawl the Conf.Sub("receivers") looking for a particular key or key format. Let's say it finds template/my_filelog_from_template. Then it does the following:

  1. Unmarshal the value into something like:

type ConfTemplate struct {
  Type string
  Parameters map[string]any
}

  2. Fetch the corresponding template from the aggregate config:

tmpl := aggregateConf.Sub("templates").Sub("my_filelog_template")

  3. Render the template using the Parameters. (Parameter validation could be applied here.)
  4. Unmarshal the rendered template into a Conf, essentially acting as another provider here?
  5. Overwrite the value corresponding to template/my_filelog_from_template with the rendered Conf.

This assumes that the user has correctly used template/my_filelog_from_template in one or more pipelines.

I think this makes some sense, but do I seem to be missing anything?


Notably, the above leaves out the ability to template multiple related components together. I think this would require different logic:

  1. Crawl Conf.Sub("receivers") looking for template/*. Let's say we find template/my_multicomponent_from_template.
  2. Fetch the corresponding template.
  3. Render the template.
  4. Unmarshal the rendered template into something like:

type TemplateReceiver struct {
  Receivers map[component.ID]confmap.Conf
  Processors map[component.ID]confmap.Conf
  PartialPipelines []PartialPipeline
}
type PartialPipeline struct {
  Receivers []component.ID
  Processors []component.ID
}

  5. Delete template/my_multicomponent_from_template from the aggregate Conf.
  6. Append a unique ID to each element of TemplateReceiver.Receivers (e.g. filelog/my_multicomponent_from_template/r1). Insert the rendered Conf into the aggregate Conf.Sub("receivers") under this ID.
  7. Append a unique ID to each element of TemplateReceiver.Processors (e.g. filelog/my_multicomponent_from_template/p1). Insert the rendered Conf into the aggregate Conf.Sub("processors") under this ID.
  8. Crawl all preexisting pipelines in the aggregate Conf.Sub("pipelines") and find any references to this instance of the template receiver. Replace them with a forward/my_multicomponent_from_template connector.
  9. For each PartialPipeline, generate a full pipeline from its Receivers and Processors, and add forward/my_multicomponent_from_template as the exporter. Generate a unique ID for each (e.g. logs/my_multicomponent_from_template/1) and insert it into the aggregate Conf.Sub("pipelines"), as sketched below.
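
Roughly, the converter would rewrite a user-facing config such as the following (illustrative only):

receivers:
  template/my_multicomponent_from_template:
    parameters: ...

service:
  pipelines:
    logs:
      receivers: [template/my_multicomponent_from_template]
      exporters: [logging]

into an expanded config along these lines (the generated IDs are hypothetical, and the logging exporter is only a placeholder):

receivers:
  filelog/my_multicomponent_from_template/r1: ...
processors:
  transform/my_multicomponent_from_template/p1: ...
connectors:
  forward/my_multicomponent_from_template:
service:
  pipelines:
    logs/my_multicomponent_from_template/1:
      receivers: [filelog/my_multicomponent_from_template/r1]
      processors: [transform/my_multicomponent_from_template/p1]
      exporters: [forward/my_multicomponent_from_template]
    logs:
      receivers: [forward/my_multicomponent_from_template]
      exporters: [logging]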

@djaglowski
Member Author

I got inspired and put together a draft of my design as a template provider and converter working in combination. See open-telemetry/opentelemetry-collector#8344

I think it's a better solution in several ways.

There are a few aspects to work through still but I believe it's ready for some feedback if anyone can take a look.

@mx-psi
Member

mx-psi commented Sep 1, 2023

Just so that we know what we currently support: one way of doing something similar today would be something like the following.

Given the 'template':

# template.yaml
receivers:
  filelog/general:
    # include: {{ .general_log }}, if you want a default just define this option
    # start_at: {{ .start_at }}, if you want a default just define this option
    multiline:
      line_start_pattern: '\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}.\d+Z|/\w+/\w+/mysqld,'
    operators:
      - type: regex_parser
        regex: '(?P<timestamp>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}.\d+Z)\s+(?P<tid>\d+)\s+(?P<command>\w+)(\s+(?P<message>(?s).+\S))?'
        timestamp:
          parse_from: attributes.timestamp
          layout: '%Y-%m-%dT%H:%M:%S.%sZ'
  filelog/error:
    # include: {{ .error_log }}, if you want a default just define this option
    # start_at: {{ .start_at }}, if you want a default just define this option
    multiline:
      line_start_pattern: '\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}.\d+Z'
    operators:
      - type: regex_parser
        regex: '(?P<timestamp>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}.\d+Z)\s+(?P<tid>\d+)\s+\[(?P<mysql_severity>[^\]]+)]\s+(?P<message>[\d\D\s]+)'
        timestamp:
          parse_from: attributes.timestamp
          layout: '%Y-%m-%dT%H:%M:%S.%sZ'

You can provide another 'parameter' file:

# parameters.yaml
receivers:
  filelog/general:
    include: /var/log/mysql/general.log
    start_at: end
  filelog/error:
    include: /var/log/mysqld.log
    start_at: end

or if you want a flatter structure:

# parameters.yaml, second version
receivers::filelog/general::include: /var/log/mysql/general.log
receivers::filelog/general::start_at: end
receivers::filelog/error::include: /var/log/mysqld.log
receivers::filelog/error::start_at: end

and then you merge the files by passing:

./myotel-binary --config template.yaml --config parameters.yaml

IMO we need to think carefully about what it is that we don't like about the current state and proceed from there. With this approach we have type validation, support for defaults, and support for any section of the configuration (be it receivers or anything else, including e.g. service::telemetry). We lack the ability to rename parameters, and there are some usability concerns (e.g. if you get the CLI argument order wrong, defaults won't work).

@djaglowski
Member Author

@mx-psi, thanks for highlighting the current capabilities.

Ultimately, my goal here is to provide a mechanism that unlocks better usability. It's not just a matter of reducing the number of parameters which a user must consider. I think a templating system should allow collector and/or usability experts to create well defined abstractions which can be provided to newer users.

In my opinion, the config merging which you've highlighted may be enough in some specific cases but I think there are a lot of limitations to the approach. I'll highlight several others below, but most importantly I think it is not an abstraction for the following reasons:

  1. The component names are tightly coupled. The user must name their receivers to exactly match those defined in the template file. Without some other mechanism of abstraction, this requires the user to inspect the template file in detail to understand which components it defines. (At this point, I think it would also be natural for a user to simply configure the necessary parameters in this file.)
  2. Due to 1, it is possible for component names to collide. e.g. User defines filelog/error and then later decides to incorporate the template. Now they may have trouble with unexpected config merges.
  3. The template file is not reusable within a single collector. If the user wants another instance of filelog/general, they must copy and/or edit the template.
  4. As you noted, the order in which configs are specified at the command line is a complication that the user must consider. This means they must have some understanding of the notion of merging configs, how to reason about their order, and likely some ability to troubleshoot interactions between configs.

Compared to the above, a template provider/converter would allow users to reason about templates in a much more abstract manner, based only on inputs and outputs.


In addition to the differences noted above, there are several other capabilities to a templating system which cannot currently be solved via config merging.

  1. A template may include custom errors, control flow, iteration, simple functions (e.g. len), and potentially custom functions (e.g. toUpper).
  2. Templates may contain multiple components and pipelines.
  3. The components and/or pipelines included in a template may be dynamically generated.
  4. Template parameters may be reused in multiple places and/or across multiple components within the template.

config.yaml

receivers:
  template/foo:
    nodes: [a, b, c, d]
    groups: [x, y, z]
    bar: true

template.yaml

type: foo
template: |
  {{ if not .nodes }}
    {{ error "Please specify at least one 'node'!" }}
  {{ end }}

  receivers:
    {{ range $i, $node := .nodes }}
    some_node_receiver/{{ $i }}:
      hostname: {{ $node }}
      include_bar: {{ .bar }}
    {{ end }}

  {{ if (gt (len .groups) 0) }}
  processors:
    {{ range $i, $group := .groups }}
    some_processor/{{ $i }}:
      do_something_if: attributes.group_name == {{ toUpper $group }}
    another_processor/{{ $i }}:
    {{ end }}
  {{ end }}

  pipelines:
    {{ range $i, $node := .nodes }}  
    logs/{{ $i }}:
      receivers: [ some_node_receiver/{{ $i }} ]
      {{ if (gt (len .groups) 0) }}
      processors:
        {{ range $i, $group := .groups }}
        - some_processor/{{ $i }}:
        - another_processor/{{ $i }}:
        {{ end }}
      {{ end }}
    {{ end }}

we have type validation

I see these as a secondary concern for templates because documentation of parameters and simple examples can show users how to use them. That said, I think we could increase usability further by having a well defined parameter type system.

support for defaults

I think we can separate the notion of defaults into two groups:

  1. Defaults for component parameters. Templates would ultimately render into components, so we would naturally have the same defaults for the first group (except where a template overrides it).

  2. Defaults for template parameters. Again, I see this as secondary but if we formalize the notion of template parameters this would be very easy to include. Without formalization, templates could still apply their own easily enough:

{{ $foo := .foo }}
{{ if not $foo }}
  {{ $foo = "bar" }}
{{ end }}

support for any section of the configuration (be it receivers or anything else including e.g. service::telemetry)

I think the proposed implementation in open-telemetry/opentelemetry-collector#8344 could be adapted easily enough to support templating of other component types, but I agree it is not intended to generalize to all parts of the configuration.


To broaden the context of my proposal, I recall the following requests for which I believe templates would be a solution:

@mx-psi
Member

mx-psi commented Sep 6, 2023

In my opinion, the config merging which you've highlighted may be enough in some specific cases but I think there are a lot of limitations to the approach. I'll highlight several others below, but most importantly I think it is not an abstraction for the following reasons:

(1) The component names are tightly coupled. The user must name their receivers to exactly match those defined in the template file. [...]
(2) Due to 1, it is possible for component names to collide. e.g. User defines filelog/error and then later decides to incorporate the template. Now they may have trouble with unexpected config merges.
(4) As you noted, the order in which configs are specified at the command line is a complication that the user must consider. [...]

For (1), (2) and (4), I agree that the current configuration resolution system is lacking. I think the solution, if possible, should build upon the existing confmap resolver instead of adding a separate templating system. It may be that we can't do that (I am currently not convinced that is the case), but it is important we know why we are not extending the existing system if we don't do that in the end.

(3) The template file is not reusable within a single collector. If the user wants another instance of filelog/general, they must copy and/or edit the template.

For (3), we can already reuse configuration for multiple components. Something like this works today:

receivers:
  filelog/general: ${file:/path/to/reusable/filelog/definition.yaml}
  filelog/custom: ${file:/path/to/reusable/filelog/definition.yaml}

One would have to specify the parameters twice (and would still suffer from issues (1) & (4)) but this kind of reusability exists today to some extent.


In addition to the differences noted above, there are several other capabilities to a templating system which cannot currently be solved via config merging.

(2) Templates may contain multiple components and pipelines.
(4) Template parameters may be reused in multiple places and/or across multiple components within the template.

On the other templating capabilities, I think (2) and (4) are supported by the current system. We can support multiple components and pipelines, and we can reuse pipelines in multiple places. What capabilities are lacking?

(1) A template may include custom errors, control flow, iteration, simple functions (e.g. len), and potentially custom functions (e.g. toUpper).
(3) The components and/or pipelines included in a template may be dynamically generated.

I can see the appeal of (1) and (3), but I am not convinced about supporting them natively on the Collector. Having custom functions or loops feels definitely like the job of something like Helm, or a configuration management tool like Chef/Puppet/Saltstack/Ansible, not of the Collector. It's unclear where to draw the line, but IMO our role should be to provide configuration capabilities that help basic users and that interact well with configuration management systems, and leave more advanced capabilities to those systems. Personally, I feel like improving abstraction on the current system is still okay in that it helps with this interaction but I would draw the line there (but I would like to see what other community members think).

@djaglowski
Member Author

(2) Templates may contain multiple components and pipelines.
(4) Template parameters may be reused in multiple places and/or across multiple components within the template.

On the other templating capabilities, I think (2) and (4) are supported by the current system. We can support multiple components and pipelines, and we can reuse pipelines in multiple places. What capabilities are lacking?

Re (2), you can define multiple components and pipelines in a single file with the current system, but can you abstract them so that the user feels they are working with one simple component? My understanding is that you can't really do this.

Re (4), can you show me how this is done? I like to think I'm a bit more familiar than the typical user of the collector but I still don't see it.

receivers:
  filelog/general: ${file:/path/to/reusable/filelog/definition.yaml}
  filelog/custom: ${file:/path/to/reusable/filelog/definition.yaml}

I see, the file would contain only the parameters, but the context would have to be managed some other way. This seems like a poor usability experience which again requires the user to have a deep understanding of config layering.

(1) A template may include custom errors, control flow, iteration, simple functions (e.g. len), and potentially custom functions (e.g. toUpper).
(3) The components and/or pipelines included in a template may be dynamically generated.

I can see the appeal of (1) and (3), but I am not convinced about supporting them natively on the Collector. Having custom functions or loops feels definitely like the job of something like Helm, or a configuration management tool like Chef/Puppet/Saltstack/Ansible, not of the Collector. It's unclear where to draw the line

The line I am proposing is that our templates would use Go's text/template and support only what that package gives us. Perhaps at a future point someone would provide a clear enough need to draw a new line, but I'm not proposing that now. The loops and such are basically free capabilities which we would get even if the goal were only to support simple parameter substitution. I know for certain that they would be very useful, but they shouldn't be an additional maintenance burden, if that's the concern.

I think the solution, if possible, should build upon the existing confmap resolver instead of adding a separate templating system. It may be that we can't do that (I am currently not convinced that is the case), but it is important we know why we are not extending the existing system if we don't do that in the end.

This issue was originally a proposal for a receiver but given the feedback I received I worked out an implementation that is fully based in the confmap package. It defines a new template scheme, a new template provider to read and validate the files, and a template converter to render the template and merge it into the global config. It requires no changes to the resolver itself, but works within the order that the resolver manages (i.e. read all configs using providers, then run the global conf through a series of converters). See open-telemetry/opentelemetry-collector#8344. I like this approach better than the receiver, so perhaps we should move discussion to a new issue, or to that PR. In any case, the implementation, in my opinion, is a natural way to extend the existing system.

@djaglowski
Member Author

I've rebooted this issue with the updated design here: open-telemetry/opentelemetry-collector#8372

@cwegener
Contributor

cwegener commented Sep 6, 2023

Great points raised.

Having custom functions or loops feels definitely like the job of something like Helm, or a configuration management tool like Chef/Puppet/Saltstack/Ansible, not of the Collector. It's unclear where to draw the line, but IMO our role should be to provide configuration capabilities that help basic users and that interact well with configuration management systems, and leave more advanced capabilities to those systems.

I think this is the sticky part.

Is it OK for OTEL to implicitly create dependencies on config management systems? To me, this sounds like a restriction on how I, as an OTEL user, can make use of the Collector. What if an external config management system has no place in my architecture?

@djaglowski
Member Author

Closing in favor of open-telemetry/opentelemetry-collector#8372
