
[Fleet] Expose permissions as part of the agent policy #94591

Merged
21 commits merged on Mar 30, 2021

Conversation


@afgomez afgomez commented Mar 15, 2021

Summary

Closes #94058

Fleet server needs to generate an API key for each agent so it can communicate with Elasticsearch. As of today, these API keys have coarse permissions: every agent can write to any `logs-*`, `metrics-*`, or `traces-*` index. We want to narrow down the permissions for each agent based on its integrations.

This PR does some groundwork to move toward this goal.

1. Specify how to add permissions to the agent policy.

To do so we need to specify in the agent policy which permissions that agent needs. We have added an `output_permissions` key to the agent policy containing this information.

The type signature is as follows:

// Same structure as https://www.elastic.co/guide/en/elasticsearch/reference/current/security-api-create-api-key.html#security-api-create-api-key-example
interface Permission {
  names: string[];
  privileges: string[];
}

interface FullAgentPolicy {
  // ...
  // At the time of writing this PR the only possible output is `elasticsearch`,
  // but that might not always be the case. There might be outputs that don't support
  // permissions at all, so we make the property optional.
  output_permissions?: {
    // Only `default` for now, since it's the only `output` that we have
    [outputKey: string]: {
      // Each integration will specify its own permissions.
      // This PR uses `fallback` as an "integration" for now.
      [roleForIntegration: string]: {
        indices: Permission[];
      };
    };
  };
}

2. Move the current hardcoded permissions from fleet server to the policy itself.

Since the integrations don't specify their permissions yet, we will keep the current broad permissions in the policy, under the `fallback` key.
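For illustration only, a policy fragment matching the shape described above could look like the following sketch. The privilege names (`auto_configure`, `create_doc`) are assumptions on my part, not taken from this PR, and the snake_case `output_permissions` key follows the type signature rather than the camelCase spelling used earlier in the description:

```typescript
// Hypothetical example of a full agent policy fragment with the broad
// fallback permissions expressed under `output_permissions`.
// Privilege names are illustrative; the PR may use different ones.
const policyFragment = {
  output_permissions: {
    default: {
      // `fallback` stands in for a real integration for now.
      fallback: {
        indices: [
          {
            names: ['logs-*', 'metrics-*', 'traces-*'],
            privileges: ['auto_configure', 'create_doc'],
          },
        ],
      },
    },
  },
};

// Print the index patterns covered by the fallback role.
console.log(JSON.stringify(policyFragment.output_permissions.default.fallback.indices[0].names));
```

Once integrations declare their own permissions, each would appear as a sibling of `fallback` with narrower `names` and `privileges`.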

Checklist


For maintainers

@afgomez afgomez added Feature:Fleet Fleet team's agent central management project Team:Fleet Team label for Observability Data Collection Fleet team labels Mar 15, 2021
Alejandro Fernández Gómez added 3 commits March 16, 2021 15:21
For now we will only have a `fallback` role containing the current
permissions. Eventually the permissions needed for each integration will
be specified as individual roles.
@ruflin

ruflin commented Mar 18, 2021

I was probably too quick to test this, as it is still in draft. I was curious how the full output currently looks, so I opened the YAML view, but got the following error:

[screenshot of the error]

@afgomez

afgomez commented Mar 18, 2021

I was curious how the full output currently looks and was opening the yaml view but got the following error

Yeah, sorry about that. The code had a silly error; it's fixed now.

@aleksmaus

#94058 (comment)

@afgomez afgomez marked this pull request as ready for review March 24, 2021 13:45
@afgomez afgomez requested a review from a team as a code owner March 24, 2021 13:45
@elasticmachine

Pinging @elastic/fleet (Feature:Fleet)

Alejandro Fernández Gómez added 4 commits March 25, 2021 10:13
@nchaulet nchaulet added release_note:skip Skip the PR/issue when compiling release notes v7.13.0 v8.0.0 labels Mar 25, 2021
@aleksmaus

aleksmaus commented Mar 25, 2021

Testing with osquerybeat, which ships its data using libbeat, the fallback permissions don't seem to be sufficient:

{"log.level":"error","@timestamp":"2021-03-25T10:46:57.811-0400","log.logger":"publisher_pipeline_output","log.origin":{"file.name":"pipeline/output.go","file.line":154},"message":"Failed to connect to backoff(elasticsearch(http://localhost:9200)): 403 Forbidden: {\"error\":{\"root_cause\":[{\"type\":\"security_exception\",\"reason\":\"action [cluster:monitor/main] is unauthorized for API key id [SsuOaXgBlpA2k-27cUU0] of user [elastic], this action is granted by the cluster privileges [monitor,manage,all]\"}],\"type\":\"security_exception\",\"reason\":\"action [cluster:monitor/main] is unauthorized for API key id [SsuOaXgBlpA2k-27cUU0] of user [elastic], this action is granted by the cluster privileges [monitor,manage,all]\"},\"status\":403}","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2021-03-25T10:46:57.819-0400","log.logger":"publisher_pipeline_output","log.origin":{"file.name":"pipeline/output.go","file.line":145},"message":"Attempting to reconnect to backoff(elasticsearch(http://localhost:9200)) with 1 reconnect attempt(s)","ecs.version":"1.6.0"}

The missing `"cluster": ["monitor"]` privilege is probably the reason.

@ruflin

ruflin commented Mar 26, 2021

I think to get this moving, let's add `cluster: ['monitor']` back in so we can continue with the implementation. But I would like to understand why we need monitoring cluster permissions.

@scunningham
Copy link

@urso Can we get rid of monitor privs? I see at least one case of the /_xpack API being called:

https://github.com/elastic/beats/blob/master/libbeat/monitoring/report/elasticsearch/client.go#L73

We probably do not want the endpoint to have that much information about the output cluster.

@urso

urso commented Mar 26, 2021

Beats in x-pack (the ones used by agent) will always query Elasticsearch for the license, but I'm not sure this API call really requires the monitoring permission. At least my assumption was 'no'.

Originally, monitoring permissions were required to check whether the template, policy, index, or alias exist. But the agent should create a configuration that suppresses these checks. If monitoring is still required for the license check, we might consider disabling that check via agent as well (and maybe have agent do the check itself).

@ruflin

ruflin commented Mar 29, 2021

To make sure we don't block this PR, I suggest we re-add the monitoring permissions here and follow up with trying to get them removed.
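As a sketch of what re-adding that privilege means, the role descriptor sent to the Elasticsearch create-API-key API might look like the following. This is an illustrative assumption following the create-API-key docs linked earlier, not code from this PR, and the index privilege names are placeholders:

```typescript
// Hypothetical role descriptor that fleet-server could pass to the
// Elasticsearch create-API-key API: the broad fallback index permissions
// plus the `monitor` cluster privilege re-added for libbeat's license check.
const roleDescriptors = {
  fallback: {
    // Re-added (for now) so libbeat's license/xpack check does not fail
    // with the 403 seen in the osquerybeat test above.
    cluster: ['monitor'],
    indices: [
      {
        names: ['logs-*', 'metrics-*', 'traces-*'],
        privileges: ['create_doc'], // illustrative; actual privileges may differ
      },
    ],
  },
};

// Show which privilege groups the fallback role carries.
console.log(Object.keys(roleDescriptors.fallback).join(','));
```

The follow-up work would then be to drop `cluster: ['monitor']` once the license check no longer requires it.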

@afgomez afgomez requested a review from ruflin March 29, 2021 11:41
@kibanamachine

💚 Build Succeeded

Metrics [docs]

Page load bundle

Size of the bundles that are downloaded on every page load. Target size is below 100kb

id     before    after     diff
fleet  344.4KB   344.4KB   +21.0B


To update your PR or re-run it, just comment with:
@elasticmachine merge upstream

@nchaulet nchaulet left a comment

🚀

@ruflin ruflin left a comment

LGTM. Nice job!

@@ -167,15 +174,18 @@ export async function createAgentActionFromPolicyAction(
}
);

// agent <= 7.9 uses `data.config` instead of `data.policy`
@nchaulet one to clean up as soon as we no longer support 7.9 agents.

@dikshachauhan-qasource
Hi @jen-huang,

If any testing is required on this ticket from our end, could you please let us know how to validate it?

Thanks,
QAS

@jen-huang

Hi @dikshachauhan-qasource, nothing specific for this for now; the risk areas here would be covered by existing test cases.

@dikshachauhan-qasource
OK, thanks for the confirmation @jen-huang.

Labels
Feature:Fleet Fleet team's agent central management project release_note:skip Skip the PR/issue when compiling release notes Team:Fleet Team label for Observability Data Collection Fleet team v7.13.0 v8.0.0
Development

Successfully merging this pull request may close these issues.

[Fleet] Add permission specs to policy
10 participants