
[Request] Supporting multiple .river files for Grafana Agent Flow #2560

Closed
payparain opened this issue Nov 29, 2022 · 8 comments
Assignees
Labels
enhancement New feature or request flow Related to Grafana Agent Flow frozen-due-to-age Locked due to a period of inactivity. Please open new issues or PRs if more discussion is needed.
Milestone

Comments

@payparain
Contributor

Currently Grafana Agent Flow can only accept a single configuration file when started (i.e. agent run &lt;file&gt;). It would be great if either River could support importing other .river files into a single entrypoint, or the Agent could accept multiple .river files on start-up.

The use case for this is when using Agent Flow in a multi-user organisation, where different teams or individuals work on disparate pieces of a single deployment and it becomes unwieldy to develop against a single file that defines how to scrape all metrics for that deployment. By supporting multiple .river files, each team can maintain its own .river configuration that is individually provided to the Agent on deployment, with each team pointing their metrics at a common exporter block. This lowers the impact of a single team having a bad config, as it only takes out their stream of data rather than the whole graph.

@rfratto
Member

rfratto commented Nov 29, 2022

We've been thinking of a way to support including other River files to support code reuse (e.g., something similar to Terraform modules). It feels like this could fit into those plans a little bit (cc @mattdurham).

One of the things we have to decide is what the behavior of loading multiple River files is:

  • Do all the files get loaded into the same graph? (If so, label names have to be globally unique across all loaded files)
  • Do all the files get loaded into separate graphs? (If so, components across different files can't interact with one another)
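To make the trade-off of the first option concrete, here is a hypothetical pair of files (labels and URLs invented for illustration) that would conflict if loaded into one shared graph:

```river
// team-a.river
prometheus.remote_write "default" {
  endpoint {
    url = "https://mimir-a.example.com/api/v1/push"
  }
}

// team-b.river -- loading this alongside team-a.river would fail
// under a single shared graph, because the label "default" collides.
prometheus.remote_write "default" {
  endpoint {
    url = "https://mimir-b.example.com/api/v1/push"
  }
}
```

Under the second option both files would load fine, but neither could reference components defined in the other.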

It sounds like you're suggesting the first approach, right?

@mattdurham
Collaborator

My solely personal opinion is that in general other files are loaded as subgraphs with dedicated arguments and exports to interact with the parent graph. This is something the team has talked about loosely and I plan on doing a more formal RFC soon(tm). @payparain if you have some time would love to get more details about your specific use case and to see how modules (name subject to change) can help with those.

There is also a side use case of dynamic components that is a bit further down the line, but I think it is a different use case than the one you are suggesting.
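As a rough illustration of the subgraph idea above (all syntax here is speculative and pre-RFC; the argument/export block names are placeholders, not a committed design), a child file might look like:

```river
// child.river (speculative, pre-RFC syntax)

// Received from the parent graph.
argument "forward_to" {
  required = true
}

// A component that exists only inside this subgraph.
prometheus.scrape "team_app" {
  targets    = [{"__address__" = "app.internal:9090"}]
  forward_to = argument.forward_to.value
}

// Exposed back to the parent graph.
export "team_app_targets" {
  value = prometheus.scrape.team_app.targets
}
```

The parent graph would only see the declared arguments and exports, not the components inside the child, which is what isolates one team's mistakes from another's.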

@payparain
Contributor Author

It sounds like you're suggesting the first approach, right?

Hey @rfratto, correct. When writing this I was imagining extra files forming subgraphs that eventually all link into one large graph, which would mandate globally unique labels. The alternative isn't as suitable for what I was hoping, since the ability to have multiple files write to the same prometheus.remote_write is where this idea stemmed from, but I haven't thought too deeply about it.

if you have some time would love to get more details about your specific use case and to see how modules (name subject to change) can help with those.

Hey @mattdurham, sure! The specific use case I find myself in is coordinating monitoring across a micro-services architecture where different services are developed by different teams. The hope was to have a core set of exporters (prometheus.remote_write, etc.) in a single place that these teams could reference by label (since all the data would be going to a common destination) from their own .river files kept with their code. A single parent file would then either import these extra .river files, or the Agent would load them all and wire the components up as if they were a single file.

This setup would allow development teams to customise the data coming out of their services without colliding with other teams by working on the same file across multiple branches. Obviously, there are other ways around this problem; we could for example standardise the instrumentation and relabel at the application level, but as the end goal is to have each team "own" their observability, giving teams the ability to relabel and format their data as required would be a big boon. It additionally allows flexibility for working around pieces of the architecture that are still being worked on and don't fit a standard mold, where relabelling is the only way forward (for example, if we used an off-the-shelf Prometheus exporter).

Thanks for the replies on this, hopefully what I'm saying makes sense :)

@mattdurham
Collaborator

Modules are meant to handle the use case of Agent-as-a-service, where you run a central set of Agents and teams can submit configs that are loaded dynamically. The below is what we are envisioning (this is entirely in the design phase, so it's all subject to change):

// Main river file
prometheus.remote_write "endpoint" {

}

s3.files "configs" {
    source = "s3://bucket/folder_with_river_files/"
}

module.loader.array "files" {
    sources = s3.files.configs.files
    arguments = {
        forward_to = [prometheus.remote_write.endpoint.receiver]
    }
}
  
// Example file in the s3 bucket
argument "forward_to" {
    required = true
}

prometheus.scraper "custom_app_scraper" {
    forward_to = [argument.forward_to.value]
}

The modular array instantiates a subgraph for each file input, and each input would use the same set of arguments (there is further discussion on whether it needs to, whether it can be a subset, optional vs. required, etc.). Keys, secrets, and the like could be passed in as arguments so that individual teams don't have direct access to them.

@payparain
Contributor Author

As a solution, that seems ideal for the use case I envisioned. Great to see it's on the board somewhere and that time is being taken to get it right. Thanks for the update!

@tpaschalis
Member

cc @mattdurham do you think we should close this issue with the introduction of module.file?

@mattdurham
Collaborator

Yes, modules are our answer to this and are being released so this feels safe to close.
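For context, a minimal sketch of the released approach, assuming module.file's syntax at the time (paths and labels here are invented for illustration, not taken from this thread): the parent config loads a child file as a module and passes the shared receiver in as an argument, matching the design discussed above.

```river
// Parent config: shared exporter owned centrally.
prometheus.remote_write "default" {
  endpoint {
    url = "https://mimir.example.com/api/v1/push"
  }
}

// Load one team's file as a module and hand it the receiver.
module.file "team_a" {
  filename = "/etc/agent/modules/team_a.river"

  arguments {
    forward_to = [prometheus.remote_write.default.receiver]
  }
}
```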

@532910

532910 commented Feb 21, 2024

module.file does not allow loading multiple files.
Please reopen this issue.
What exactly I'm trying to do is split the matchers from prometheus.exporter.process into several files.
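One workaround, since each module.file block loads exactly one file, is to declare one block per file, at the cost of enumerating every file in the parent config (filenames hypothetical); whether this actually helps split prometheus.exporter.process matchers depends on what the module files can export back to the parent:

```river
// Each team's config lives in its own file, loaded as a separate module.
module.file "team_a" {
  filename = "/etc/agent/modules/team_a.river"
}

module.file "team_b" {
  filename = "/etc/agent/modules/team_b.river"
}
```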

@github-actions github-actions bot added the frozen-due-to-age Locked due to a period of inactivity. Please open new issues or PRs if more discussion is needed. label Mar 23, 2024
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Mar 23, 2024