Fargate: support launching tasks in offline subnets #48
Comments
Hi @copumpkin, using Fargate in an isolated VPC is supported today. If you are seeing "DockerTimeoutError" and you have logging enabled, that is likely caused by the awslogs logging driver not being able to reach CloudWatch Logs. CloudWatch Logs supports PrivateLink, which will enable you to have an isolated VPC but still get container logs in Fargate.
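For reference, a minimal sketch (not from the thread itself) of creating the CloudWatch Logs interface endpoint with boto3; the region follows the eu-west-1 example used later in the thread, and the VPC, subnet, and security group IDs are placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

# Create an interface (PrivateLink) endpoint for CloudWatch Logs so the
# awslogs driver can reach the Logs API from an isolated subnet.
# The VPC, subnet, and security group IDs below are placeholders.
response = ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.eu-west-1.logs",
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    PrivateDnsEnabled=True,  # let logs.eu-west-1.amazonaws.com resolve to the endpoint
)
print(response["VpcEndpoint"]["VpcEndpointId"])
```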
@clareliguori wow, that's awesome. Indeed the culprit was the awslogs driver not being able to reach CloudWatch Logs. Relatedly, could that error message be made more descriptive about likely causes? Either way, thanks for getting me unstuck!
Yep, agreed this error message could be improved.
Thanks! And on the
You can run Fargate tasks in a completely isolated subnet. With regard to CloudWatch endpoint issues, if any, you would need to enable "Private DNS Name" for com.amazonaws.eu-west-1.logs (by default, Private DNS is not enabled for the CloudWatch Logs endpoint). Go to the CloudWatch Logs endpoint in the VPC console > Actions > Modify Private DNS name. Also, make sure you're using the latest Fargate platform version (1.3.0). Now that ECR endpoints are supported as well, everything (ECR/CloudWatch) works like a breeze.
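A hedged sketch of the same steps via the API rather than the console; the endpoint, VPC, subnet, security group, and route table IDs are placeholders, and the region is the eu-west-1 example from the comment above:

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

# Enable Private DNS on an existing CloudWatch Logs endpoint
# (equivalent to VPC console > Actions > Modify Private DNS name).
ec2.modify_vpc_endpoint(
    VpcEndpointId="vpce-0123456789abcdef0",
    PrivateDnsEnabled=True,
)

# Image pulls from ECR in an isolated subnet need the two ECR interface
# endpoints, plus an S3 gateway endpoint for the image layers.
for service in ("com.amazonaws.eu-west-1.ecr.api", "com.amazonaws.eu-west-1.ecr.dkr"):
    ec2.create_vpc_endpoint(
        VpcEndpointType="Interface",
        VpcId="vpc-0123456789abcdef0",
        ServiceName=service,
        SubnetIds=["subnet-0123456789abcdef0"],
        SecurityGroupIds=["sg-0123456789abcdef0"],
        PrivateDnsEnabled=True,
    )

ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.eu-west-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],
)
```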
Which service(s) is this request for?
Fargate
Tell us about the problem you're trying to solve. What are you trying to do, and why is it hard?
For various security/regulatory reasons, we often need to do work in completely "offline" VPCs with no internet connectivity. This means no public IPs, no NATs, no IGWs, or any other path to the internet. Fargate will let me submit a task to a subnet that can't reach the internet, but the task then times out (not a great user experience!), presumably because it can't communicate its status back to the ECS control plane.
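For concreteness, a hypothetical sketch of the kind of launch described here, using boto3 to run a Fargate task into a private subnet with no public IP; the cluster, task definition, subnet, security group, and region are placeholders, not values from the issue:

```python
import boto3

ecs = boto3.client("ecs", region_name="eu-west-1")

# Launch a Fargate task into an isolated subnet: no public IP is assigned,
# so the task has no path to the internet unless VPC endpoints exist.
response = ecs.run_task(
    cluster="my-cluster",
    launchType="FARGATE",
    taskDefinition="my-task:1",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],
            "securityGroups": ["sg-0123456789abcdef0"],
            "assignPublicIp": "DISABLED",
        }
    },
)
# run_task reports placement failures separately from started tasks.
print(response["failures"] or response["tasks"][0]["taskArn"])
```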
Are you currently working around this issue?
Not using Fargate for this sort of work 😦
Additional context
This could be in part connected to #1 because the common case is that someone will want to launch images from ECR into Fargate on a private subnet, but even independently of #1 (e.g., we run a private Docker registry) Fargate containers would ideally maintain some sort of separate pathway to the ECS control plane so that customer-level networking requirements don't prevent the basic service from working (or logging, or reporting statuses).
Somewhat interestingly, if I launch a Fargate task into a private subnet today, I do actually get error messages in the ECS control plane up until the container actually launches. So for example, if I point my task definition at my private registry and put in a bad image ID, the ECS control plane will pass along the Docker error message properly. But once it gets done with early provisioning activity and actually tries to launch the Fargate container, it just sits in the "waiting" state and eventually times out.
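One way to observe the behavior described above is to poll the task until it stops and read whatever reason the control plane reports. This is a sketch under assumed placeholder names (cluster and task ARN), not part of the original report:

```python
import time
import boto3

ecs = boto3.client("ecs", region_name="eu-west-1")

def wait_for_stop_reason(cluster, task_arn):
    """Poll until the task reaches STOPPED, then return the reported reason
    (e.g. an image pull error, or the timeout described in this issue)."""
    while True:
        task = ecs.describe_tasks(cluster=cluster, tasks=[task_arn])["tasks"][0]
        if task["lastStatus"] == "STOPPED":
            return task.get("stoppedReason", "<no reason reported>")
        time.sleep(10)

print(wait_for_stop_reason("my-cluster", "arn:aws:ecs:eu-west-1:123456789012:task/my-cluster/0123456789abcdef0"))
```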