Fetching secrets from S3 and giving them to a task #328
@jrydberg We announced support for IAM Roles for ECS Tasks on 07/14/2016. The documentation here illustrates how you can leverage that to vend secrets stored in S3 buckets to containers in a task. Please let us know if that solves your use-case. Thanks!
+1
I think what the OP is requesting is that feature built into ECS, removing the need for S3-specific logic in the container build to read secrets. I'm doing this as well -- I've got an "entrypoint.sh" script that downloads secrets from S3 and injects them into the environment and/or passes them as options to the service I'm running. It would be great if ECS had its own secrets management system, similar to other orchestration tools like Kubernetes, Mesos, Docker Enterprise, etc. This would help make containers more portable (which may run contrary to AWS business cases, but is the intent of containerization in general) and remove knowledge of how to obtain secrets from the containers themselves. +1
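A minimal sketch of that entrypoint pattern, assuming a hypothetical bucket, key, and variable name; the ECS task role supplies the credentials, so nothing is baked into the image:

#!/bin/sh
# Sketch of an entrypoint that pulls a secret from S3 into the environment.
# Bucket, object key, and variable name are placeholders.
set -e

# The AWS CLI picks up the task role credentials automatically.
DB_PASSWORD="$(aws s3 cp s3://my-secrets-bucket/myapp/db_password -)"
export DB_PASSWORD

# Hand off to the real service with the secret in its environment.
exec "$@"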
@mrburrito Thank you for your feedback. We have recorded this as a feature request. The [Secure String Parameters] APIs should also provide you with an alternative, as they remove the need for storing secrets in an S3 bucket and fetching them in an entry point script. Instead, you store the value as a Secure String Parameter and read it back, decrypted, at runtime. The Managing Secrets for Amazon ECS Applications Using Parameter Store and IAM Roles for Tasks blog shows how you can integrate this with IAM Roles for Tasks as well.
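A rough sketch of that alternative, assuming a hypothetical parameter name; Parameter Store decrypts the value on read, so the script needs no S3 download or explicit KMS call:

DB_PASSWORD="$(aws ssm get-parameter \
  --name /myapp/db_password \
  --with-decryption \
  --query Parameter.Value \
  --output text)"
export DB_PASSWORD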
I've raised a similar feature request around using SSM Parameter Store as a first-class store of secrets in a similar style to Kubernetes: #1209
+1
SSM Parameter Store secrets are great if the containerized application supports environment variables. 👍 But many applications require a config file or set of files. Running these applications on ECS currently requires building a custom container image which can load those files from S3, or similar. Cooking custom images for each application isn't a great developer experience, and creates additional operational burden. It would be wonderful if ECS had something like config maps, or could load volumes/files from S3 or EBS similar to loading env from SSM. I'd really like something in the realm of:
{
"volumes": [
// unsure of syntax, options include:
// download every object under this prefix into a volume
{"name": "config", "s3": {"uri":"s3://config.example.com/redis/"}}
{"name": "config", "s3": {"bucket":"config.example.com", "prefix":"/redis/"}}
// or individual objects
{"name": "config", "s3": {"uri":"s3://config.example.com/redis.conf"}}
{"name": "config", "s3": {"bucket":"config.example.com", "key":"/redis/redis.conf"}}
// maybe many
{"name": "config", "s3": {"bucket":"config.example.com", "keys":["/redis/redis.conf", "/redis/cert.pem"]}}
// maybe archives, a la Dockerfile ADD and layer/volume exports
{"name": "config", "s3": {"uri":"s3://config.example.com/redis.tar.gz"}}
],
"containerDefinitions": [
{
...,
"mountPoints": [
{"sourceVolume": "config", "containerPath": "/etc/redis/"}
]
}
]
}

Update: to clarify, I don't think these volumes should write back to S3, just initialise an ephemeral volume from S3. I'm mostly wanting read-only volumes which are loaded from S3 at task start, and I'm specifically interested in this capability on Fargate.
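Until something like this exists, one way to approximate it is an entrypoint that populates an ephemeral volume from S3 before the main process starts. A rough sketch, reusing the bucket and prefix from the example above purely as placeholders:

#!/bin/sh
# Sketch: initialise a config directory from S3 at task start.
# Bucket, prefix, and target path are placeholders.
set -e

CONFIG_DIR=/etc/redis
mkdir -p "$CONFIG_DIR"

# Download every object under the prefix into the config directory.
aws s3 sync s3://config.example.com/redis/ "$CONFIG_DIR"

# Start the real process once the files are in place.
exec "$@"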
Closing this issue; this functionality is supported by the generic secrets feature. Feel free to reopen if this feature is specific to S3 and not just an implementation detail. @sj26: mind opening a new issue for the config maps request? Or even better, open a PR with a more detailed proposal that we can evaluate. Let me know if you're interested in pursuing that and we can discuss more details.
Sorry I didn't see that request @adnxn, and we do still have this requirement, so I've just done so: https://github.com/aws/amazon-ecs-agent/issues/2891
Feature request (or discussion of feature):
Getting sensitive information to your program is always a tricky problem. Today you either have to store it in the artefact, in the task definition, or fetch it at runtime from e.g. S3.
The first two aren't really safe. The latter is not very transparent to the application and requires special logic.
I propose that we expand the task definition and the agent with the following functionality:
1. The task definition is expanded to reference encrypted secrets stored in S3.
2. The agent fetches each object and decrypts it using the KMS Decrypt function.
3. If (2) fails, the task fails.
It will be up to the user to make sure the task role can access S3 and has permission to use the KMS key.
A downside of this is that the contents will be limited to 4 KB, since that's the payload size KMS can handle directly. One could imagine using a data key for larger payloads.
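For comparison, roughly what the proposed flow looks like when done by hand in an entrypoint today (bucket, key, and variable names are placeholders; the task role needs s3:GetObject on the object and kms:Decrypt on the key):

#!/bin/sh
# Sketch: fetch a KMS-encrypted blob from S3 and decrypt it at start-up.
# Assumes the S3 object holds the raw ciphertext from a symmetric KMS key.
set -e

aws s3 cp s3://my-secrets-bucket/myapp/db_password.enc /tmp/secret.enc

# KMS returns the plaintext base64-encoded; decode it before use.
DB_PASSWORD="$(aws kms decrypt \
  --ciphertext-blob fileb:///tmp/secret.enc \
  --query Plaintext \
  --output text | base64 -d)"
export DB_PASSWORD
rm -f /tmp/secret.enc

exec "$@"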