
Feature request: First class support for SSM parameter store as a secrets store #1209

Closed
tomelliff opened this issue Jan 23, 2018 · 63 comments

Comments

@tomelliff

Summary

Use SSM parameter store as a secret store in a similar style to Kubernetes secrets.

Description

SSM Parameter Store currently works as a very low-cost-of-entry secret store that allows applications with the appropriate IAM permissions to fetch and decrypt secrets at a given path. It's also commonly used by people running ECS who want to inject secrets into their containers at run time (to avoid baking them into images and to allow environment-specific secrets), and it's recommended in some AWS blog posts.

Unfortunately this requires the application to fetch the secrets from SSM Parameter Store at startup or, probably more commonly, rely on an entrypoint script that fetches the secrets using the AWS CLI or something like confd.

This is a bit fiddly, and it also gets in the way when you're using a third-party Docker image with a useful entrypoint script that you don't want to have to extend, to avoid drift between the official image's entrypoint script and your fork.
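To illustrate the workaround being described, here is a minimal sketch of such an entrypoint wrapper. Everything in it is hypothetical: `fetch_params` is stubbed so the sketch is self-contained, but a real image would have it call something like `aws ssm get-parameters-by-path --with-decryption` and parse the CLI's JSON output.

```shell
#!/bin/sh
# Hypothetical entrypoint sketch: export fetched parameters as environment
# variables, then exec the container's real command.
set -eu

# Stubbed fetch so this sketch runs without AWS access; a real script would
# call the AWS CLI (e.g. `aws ssm get-parameters-by-path --with-decryption`)
# and turn the response into NAME=VALUE lines.
fetch_params() {
  printf '%s\n' "GREETING=Hello" "RECIPIENT=World"
}

# Naive loop: assumes parameter values contain no whitespace.
for kv in $(fetch_params); do
  export "$kv"
done

# Hand off to the real command (the Dockerfile's CMD) so signals are
# forwarded correctly.
if [ "$#" -gt 0 ]; then
  exec "$@"
fi
```

The tools mentioned later in this thread (ssm-parent, pstore, chamber) implement a more robust version of the same idea.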

Ideally ECS would support having the ECS task read parameters from SSM Parameter Store at startup and inject them as environment variables or as volume mounts in tmpfs, similar to Kubernetes.

I'm picturing something like this as a task definition:

{
  "containerDefinitions": [
    {
      "name": "hello-world",
      "image": "123456789012.dkr.ecr.eu-west-1.amazonaws.com/hello-world:v1",
      "memory": 200,
      "cpu": 10,
      "essential": true,
      "environment":  [
        {
          "name": "GREETING",
          "value": "Hello"
        },
        {
          "name": "RECIPIENT",
          "valueFrom": "/production/hello-world"
        }
      ]
    }
  ],
  "family": "hello-world",
  "taskRoleArn": "arn:aws:iam::123456789012:role/HelloWorld"
}

where /production/hello-world is an SSM parameter that is optionally encrypted.

Note that the task's IAM role must have permission to read the SSM parameter (and decrypt it with the appropriate KMS key if it's encrypted).
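For illustration, the task role's policy might look something like this (a sketch only; the parameter path, region, account ID, and key ID are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["ssm:GetParameters", "ssm:GetParametersByPath"],
      "Resource": "arn:aws:ssm:eu-west-1:123456789012:parameter/production/hello-world"
    },
    {
      "Effect": "Allow",
      "Action": "kms:Decrypt",
      "Resource": "arn:aws:kms:eu-west-1:123456789012:key/your-key-id"
    }
  ]
}
```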

This allows the ECS task's secrets to be separated from the application code and deployment parts and also means that users with read only access to the task definition can't easily view secrets that are injected as plain environment variables.

@nathanpeck
Member

Thanks for the feature request! It's added to our internal feature request tracker. :)

@michalgasek

+1

1 similar comment
@dsouzajude

+1

@samuelkarp
Contributor

Hey @tomelliff, thanks for opening this issue! Secrets is one of the topics we've been thinking about internally and using SSM Secure Parameters is one of the implementations that we've started to look at. However, it's useful to think about this from the perspective of what the trust zones are and how that plays into the choice of appropriate technologies.

Containers make it easy to have some levels of isolation and tend to be fairly secure by default (cgroups and namespaces plus a seccomp policy applied to limit syscalls). However, containers are also really flexible in that these security mechanisms can all be tuned or turned off, and we make those knobs available to you in your ECS task definition as well (the "privileged" and "linuxCapabilities" parameters, as well as the "securityOptions" parameter when you have a compatible Linux Security Module enabled).

In ECS, we make it fairly easy to deploy multiple tasks to the same EC2 instance. If your security model is such that you want each task to be isolated from others, you should use the default configuration and not enable additional privileges for your tasks. If your security model is that the cluster represents how applications should be isolated from each other, you can take advantage of the isolation that EC2 instances provide to prevent one cluster from impacting another cluster.

Here are some of the things we think about:

  • API: How to put in your secrets is an open question. We could make part of the task definition available for secure parameters, but then you'd have to update the task definition every time your secret changed and redeploy. If we did this, we'd also have to decide whether the permission that controls viewing a task definition (ecs:DescribeTaskDefinition) should also control access to the secret. Or, we could make it so you could reference a secure parameter ARN in the task definition, but then you'd have to make multiple API calls instead of just one. And either way, we'd have to decide whether it makes sense for this to be overridable in StartTask/RunTask.

  • Permissions: If ECS starts retrieving secrets for you, we need to make sure that we model this in IAM in a sensible way that gives you control. Probably the most important consideration here is figuring out which IAM role ECS should use for retrieval. We might be able to use the task execution role (built for ECR image pull and log delivery in Fargate), the EC2 instance profile role (though the EC2 instance profile is accessible by all the tasks running on the same instance), or the ECS service-linked role (though the policy applied is then controlled by AWS). Or we might need to add a new role.

  • Environment variables: These are super easy to use, but they have a few drawbacks that are exacerbated in containers when you have mixed use of the underlying operating system or when you're trying to ensure that processes/containers on the same underlying EC2 instance cannot share information. The most important drawback is that secrets embedded in environment variables can potentially be viewed without appropriate authentication or authorization. You can see them in docker inspect output if you have access to talk to the Docker daemon (meaning that you're running as root or you've been added to the POSIX group controlling access), you can see them in /proc if you're root, and it's easy to unintentionally expose them by logging all of your environment variables. And since containers frequently run as root, it's possible to view the environment variables of one container from another if you mount in the Docker daemon's socket. Another consideration here is changes at runtime (secret rotation); while it is possible for a process to change its own environment variables, it is not possible for another process to do so, meaning that it would require a process to be restarted in order to rotate secrets.

  • Volumes: Volumes could be a better approach than environment variables because they don't share the same drawbacks; the secrets wouldn't be visible in docker inspect or in /proc, there is a way to get notified of a file changing, and they don't directly encourage caching forever. However they do have drawbacks of their own: namely, secrets at rest still need to be secured – we don't want to unintentionally expose your secrets if you take a snapshot of the volume where they're stored. A tmpfs might be an approach here, but we'd still want to think about whether secrets would be written to disk in the case of swapping. A ramdisk could work, but then there's another device to manage and it makes installing the ECS agent more difficult. An approach that protects secrets at-rest on disk (like encryption) might be a good option here.

  • Observability: An auditing mechanism to let you know when secrets are accessed can be a really good tool for defense-in-depth. Recording when, who, and from where a secret was accessed can help you find if inappropriate access occurs. It would be pretty hard to implement an audit log for environment variables (since you can't really intercept a call to read one) and would be somewhat less than straightforward to do with a volume-based approach (you could probably do it with a FUSE filesystem, but now you have even more complexity to manage). It'd be nice to have an approach that allows us to offer an audit log.

There are also some other implementations here that inform the way we think about this:

  1. Task IAM roles for ECS — We built a feature to vend IAM role credentials to applications in containers as part of ECS, and this has a lot of the same concerns as general secret management. We ended up building an HTTP endpoint that would be exposed into the container on a well-known IP address and updating each of the AWS SDKs to retrieve credentials from that endpoint. In order to use the same endpoint for multiple containers with different roles, we required that the HTTP request include a token to identify the caller; we pass the token to the container via an environment variable. We log every access to an audit log that records the timestamp and the IP address where the request originated; the audit log thus provides a way to monitor and observe whether unauthorized access occurs to the credential endpoint.

  2. SSM Secure Parameters — Even though we're talking about integrating with SSM Secure Parameters here, the way that Secure Parameters works is itself a good example. SSM created a well-defined API to retrieve parameters/secrets, enforces IAM permissions (authentication via AWS SigV4 and authorization via IAM policies), and records audit information in CloudTrail. While it does require adopting its API in your application, you do get all the benefits of the system's design to securely store and vend your secrets.

With all of that said, we're going to have to look at this from the perspective of making useful trade-offs; a ton of existing software expects secrets to be available in either environment variables or in files and we do want to make this feature useful with existing software too.

I hope this helps shed some light on our thought process. We'd love feedback on everything I've written here to help guide our approach when we do build secret distribution.

Sam

@andriyfedorov

+1

@ekini

ekini commented May 3, 2018

Meanwhile, I use https://github.com/springload/ssm-parent as an entrypoint for Docker to inject SSM parameters into a process environment.

@endofcake
Contributor

https://github.com/glassechidna/pstore is another tool which can be used as an entrypoint to get parameters from SSM.

@robmorgan

I'm also mentioning Chamber as I use it quite a bit: https://github.com/segmentio/chamber

@mwarkentin

Similarly it would be nice to have a native way to interact with the new AWS Secrets Manager.

@mlcloudsec

++++1

@wotek

wotek commented Jul 3, 2018

+1

@rorofino

+++1

@aavileli

aavileli commented Aug 9, 2018

ECS needs to support SSM parameter references in task definitions. There are so many ugly hacks, including issues with rate limiting when many containers boot up at once and call a path namespace that could have 8 or more keys.

@dossas95

+1

7 similar comments
@BalmungSan

BalmungSan commented Aug 21, 2018

+1

@danielpater

+1

@arjen-ag5

+1

@hridyeshpant

+1

@dancallan

+1

@AndreyMarchuk

+1

@ddriddle

ddriddle commented Sep 7, 2018

+1

@brianshepanek

+1

@adnxn
Contributor

adnxn commented Oct 25, 2018

update: the prs for ssm parameter store integration via environment variables have been merged into our dev branch! will update this issue once the feature is released.

@mohsinhijazee

Instead of having each variable specified individually, it would be nice to specify the namespace/path and have all the variables under that namespace injected. With one variable per entry, the task definition has to be updated each time a variable is added, removed, or renamed.

Sure, there's a risk that by mistake the container ends up with a huge number of parameters, but flexibility comes with responsibility.

Thoughts?

@adnxn
Contributor

adnxn commented Oct 26, 2018

would be nice to mention the namespace/path and all the variables falling under that namespace should be injected

@mohsinhijazee: we'll track this idea internally and update if we decide to move forward with it (or you'll see a pr i'm sure).

hm but otherwise - do you think you could write a detailed proposal of how you think this should work and what the failure modes would look like? it's always great to have direct input from users and it's very helpful for working through our design process. we have some examples of proposals here. tbh we're still figuring out how to best empower users to contribute to the design of features, but this feels like a good starting point.

let me know if you have any questions, or even feel free to open a new issue to track this use case specifically.

@petderek
Contributor

We are tracking secrets support for Fargate internally. I can't share dates at this time -- but I'll update this thread when changes are live.

@yumex93
Contributor

yumex93 commented Nov 27, 2018

@CUBiKS We are tracking cf support for secrets internally. Once we get to know when it will be released, I will update the thread.

@stguitar

stguitar commented Dec 4, 2018

@yumex93 Any updates since re:Invent about the ability to use something like "ValueFrom" as opposed to "Value" in the CF Environment object of key value pairs? I am specifically interested in the EC2 launch type.

I am guessing the only option otherwise is to use something like the tools mentioned above with an entrypoint script.

@pmyjavec

pmyjavec commented Dec 6, 2018

What isn't clear to me is whether or not I can make API calls to Parameter Store or SSM to retrieve secrets from inside a process running in a Fargate-based container.

Until the Fargate platform supports SSM-exported environment variables, I would be happy to make API calls from inside my application code. Is this a possible workaround in the meantime?

The reason I ask is that I tried calling SSM today from inside Fargate and wound up with errors about AWS_CONTAINER_CREDENTIALS_RELATIVE_URI being empty. @petderek, @samuelkarp, are we stuck with nothing for now?

@pmyjavec

pmyjavec commented Dec 6, 2018

Actually, after a bunch of reading, I realized that if you set taskRoleArn in the task definition (including on Fargate), the SDK is able to access the SSM / Parameter Store APIs as required.

Sorry for the noise.

@stguitar

stguitar commented Dec 6, 2018

@pmyjavec cool stuff...

However, I want to be clear that this is tangential to the actual feature request here. Fargate or not, I am aware that you can manually define environment variables for an ECS task that look up values from SSM Parameter Store by using the "valueFrom" parameter provided in the console UI. The gap is that there is no way to do this outside of the console UI, such as in a CloudFormation template.

I am currently exploring the sort of entrypoint wrapper solution mentioned above, but I have some JVM-based solutions and some Node solutions, so I really don't have a common 'environment' that could be written to execute these strategies.

@yumex93 @petderek @samuelkarp - am I wrong here?

@samuelkarp
Contributor

@stguitar You should be able to make API calls to AWS services (including SSM Parameter Store) from within Fargate containers. You will need to ensure that you've specified an IAM role for your task.

If you're having issues using IAM roles, you can open an issue or a support case.

@stguitar

stguitar commented Dec 6, 2018

@samuelkarp Thanks for the tip! You are referring to using something like an entrypoint to fetch those upon container start, correct? In other words, there is still no direct way, as described at the top of this thread, to specify the parameter path directly in the environment variable key pairing?

@samuelkarp
Contributor

@stguitar Correct, this feature has not been released for Fargate yet.

@stguitar

stguitar commented Dec 6, 2018

@samuelkarp I am actually not using Fargate :D I think there is some cross talk on this thread making it hard to follow (at least in my opinion).

are you saying this IS supported for EC2 launch types?

@samuelkarp
Contributor

samuelkarp commented Dec 6, 2018

My apologies. You can reference values in SSM Parameter Store in environment variables for the EC2 launch type today; documentation is available here. While we do not have support for this in CloudFormation yet, you can use the AWS Console, the AWS CLI, or the AWS SDK to create task definitions that use this feature.

@stguitar

stguitar commented Dec 6, 2018

@samuelkarp cool link, so maybe I'm getting too excited here... The link you shared has a snippet for a task definition:

"containerDefinitions": [
    {
        "secrets": [
            {
                "name": "environment_variable_name",
                "valueFrom": "arn:aws:ssm:region:aws_account_id:parameter/parameter_name"
            }
        ]
    }
]

Is this not a CloudFormation snippet? This looks like its injecting parameter store values into the environment for the container?

If this is the case, christmas came early.

@greenciu

greenciu commented Dec 6, 2018

@stguitar - that is the JSON representation of a task definition.
The ECS console allows you to "configure via JSON". CloudFormation support is still to come.

@stguitar

stguitar commented Dec 6, 2018

@CUBiKS AH... well, rats!

Ok, cool stuff.

Thanks to everyone for clarifying this for me, and I look forward to the CloudFormation support. Is this the right place to keep up to date on that, and do you have any kind of timeline before I hack too much on getting an alternative solution in place?

@sorjef

sorjef commented Dec 10, 2018

While the support for Fargate is on the way, can we at least have an error when creating a task definition with a secrets field and "requiredCompatibilities" = ["FARGATE"]?

I've just spent a lot of time trying to figure out why my task could not be started. The error The specified platform does not satisfy the task definition’s required capabilities. was not informative at all. I believe an error on task definition creation, rather than on task instance creation, would have saved me a lot of time.

Also, adding a note that this is not supported for Fargate to the docs for the secrets field would be great.

@bjornbos

bjornbos commented Jan 3, 2019

While the support for Fargate is on the way, can we at least have an error while creating a task definition with secrets field and "requiredCompatibilities" = ["FARGATE"].

I've just spent a lot of time trying to figure out why my task could not be started. The error The specified platform does not satisfy the task definition’s required capabilities. was not informative at all. I believe error on task definition creation rather than on task instance creation would have saved me a lot of time.

Also, adding that this is not supported for Fargate to the doc for secrets field would be great.

This issue has been closed with the announcement of Fargate Platform Version 1.3 (https://aws.amazon.com/about-aws/whats-new/2018/12/aws-fargate-platform-version-1-3-adds-secrets-support/), but unfortunately this error still occurs when trying to create a service in the management console that uses tasks with secrets.

@kueben

kueben commented Jan 10, 2019

EDIT:
for others who ran into this: it seems Secrets Manager is not currently supported, whereas SSM Parameter Store is.

@bjornbos Is there a workaround for this? I'm defining the secrets in my ecs-params.yml with launch type FARGATE and deploying the service with ecs-cli.

@bjornbos

@kueben Actually, you can use Secrets Manager, but it requires a special trick, as described in https://docs.aws.amazon.com/systems-manager/latest/userguide/integration-ps-secretsmanager.html

So I created a secret in the secret manager with name a/b/c and gave that a plain text value, let's say "secretvalue". Then, in your Fargate json you add the secret in the following way:

"secrets": [{
   "name": "environment_variable_name",
   "valueFrom": "arn:aws:ssm:{{region}}:{{account_id}}:parameter/aws/reference/secretsmanager/a/b/c"
}]

Now it works :)

@steverecio

@bjornbos I have this set up with a fargate task and there's no errors thrown but accessing the environment variable environment_variable_name returns the ARN string, not the actual parameter store value. Is there a specific role or policy I need to set on the task to ensure that the value is rendered instead of the ARN?

@bjornbos

bjornbos commented Jan 26, 2019

@steverecio Make sure you use "valueFrom" and not "value". I have the following IAM policy attached to the task execution role:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ssm:GetParameters",
        "secretsmanager:GetSecretValue",
        "kms:Decrypt"
      ],
      "Resource": ["*"]
    }
  ]
}

@steverecio

Hmm I have those IAM policies attached and I'm using valueFrom in my task definition secrets array. Is this different from using SecureString values in Parameter Store?

I have the same settings, but the reference shows an ARN like arn:aws:secretsmanager:us-west-1:123456789:secret:s1-secret-E18LRP, while mine is arn:aws:ssm:us-east-1:{{account_id}}:parameter/production/database/url (note the different prefix: arn:aws:secretsmanager vs arn:aws:ssm).

For reference I'm creating my values using terraform aws_ssm_parameter. Maybe I should be using secretsmanager_secret_version 🤔

@bjornbos

bjornbos commented Jan 26, 2019

@steverecio Your reference is incorrect; it should be arn:aws:ssm:us-east-1:{{account_id}}:parameter/aws/reference/secretsmanager/production/database/url
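To make the mapping concrete, here's a tiny hypothetical helper showing how a Secrets Manager secret name maps onto the SSM reference ARN that valueFrom expects (the region, account ID, and secret name are all placeholders):

```shell
#!/bin/sh
# Hypothetical helper: build the /aws/reference/secretsmanager/ SSM parameter
# ARN for a given Secrets Manager secret name.
region="us-east-1"
account_id="123456789012"             # placeholder account ID
secret_name="production/database/url" # the Secrets Manager secret name

value_from="arn:aws:ssm:${region}:${account_id}:parameter/aws/reference/secretsmanager/${secret_name}"
echo "$value_from"
```

The key point is the arn:aws:ssm prefix plus the /aws/reference/secretsmanager/ path segment, rather than the secret's own arn:aws:secretsmanager ARN.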

@netmanuy

I can read from Secrets Manager via SSM without problems, but if I try to read directly from Secrets Manager using:

        "secrets": [
            {
                "name": "mysecret",
                "valueFrom": "arn:aws:secretsmanager:us-east-1:123456:secret:mysecret-AbCdEf"
            }
        ]

my tasks can't be started, and I get the following message:

The specified platform does not satisfy the task definition’s required capabilities.

I'm using Fargate platform version 1.3.0; any ideas?

@pspanchal

I am new to Parameter Store and ECS. It seems to me that the new task definitions can import secrets as environment variables as long as the task definition is static:

"secrets": [
    {
        "name": "mysecret",
        "valueFrom": "arn:aws:secretsmanager:us-east-1:123456:secret:mysecret-AbCdEf"
    }
]

This is fine if the Task needs to get a secret of a service account.
But what if the Task needs to get the secret of the actual user?

Can the task definition accept either user variables or environment variables?

"secrets": [
    {
        "name": "mysecret",
        "valueFrom": "arn:aws:secretsmanager:us-east-1:123456:secret:{user-variable}/{environment-variable}-AbCdEf"
    }
]

@hendrixroa

@bjornbos works fine for ssm also. Thanks

@Bramzor

Bramzor commented May 2, 2019

No progress on getting this functionality into CloudFormation?

Link to the CloudFormation request: aws/containers-roadmap#97 (comment)

@sachdeva-vivek

I am also waiting for this functionality to be available in Cloudformation.

@asztal

asztal commented May 20, 2019

It's "coming soon", follow that issue for updates.
