[core] allow asset bundling on docker remote host / docker in docker #8799
Comments
I'm also struggling with this problem when using aws-lambda-nodejs in a VSCode Remote Container environment. The project workspace is already mounted... I wish the good old non-docker implementation would come back for compatibility.
@jogold The plot thickens... (which is a good thing!)
@alekitto thanks for opening this issue. Can you detail the exact problem you're seeing?
It currently runs without issues in CodeBuild, which is a docker-in-docker setup. Can you elaborate on the Gitlab CI setup maybe? Is your solution something along these lines?

$ docker volume create asset-input
$ docker volume create asset-output
$ docker run -v asset-input:/asset-input -v asset-output:/asset-output --name helper busybox
$ docker cp <asset source> helper:/asset-input
$ docker run --rm -v asset-input:/asset-input -v asset-output:/asset-output <user command>
$ docker cp helper:/asset-output <staged bundling dir>
$ docker rm helper
$ docker volume rm asset-input asset-output

For the …
I'm trying to compile a lambda from typescript down to js to be executed on the node 10.x runtime.
The build container is a …
Yes, with an auto-generated id appended to the volume names to avoid collisions.
IIRC the …
Could this be related to #8544?
UPDATE: my current workaround (valid only with the docker executor on gitlab-runner) is to configure gitlab-runner to share the …
@alekitto This has been fixed in #8601 and released in v1.46.0. Can you try with a version >= 1.46.0?
I tried with 1.47 locally, but I cannot make it work when using docker-machine (it throws …
CDK or inside the container?
Which path?
The bundling container, when executing …
It tries to mount …
Not sure I'm following here... can you detail this?
I'm not currently executing the docker engine on my local machine. Then I try to run … The command built by cdk is … The problem is that … Inspecting the docker machine via SSH I can see all of the folder structure, but no file is present, because nothing has been copied to the docker host from my local computer.
Was bitten by this issue today when trying to use aws-cdk in a VSCode Remote Container environment on Windows 10. Bundling assets with Docker does not work in Docker in Docker environments running on WSL2. Edit: Found out about ILocalBundling while reading this blog. It worked fine; I guess that's the best solution for people using VS Code + Remote Containers, since any Docker in Docker problem is avoided altogether.
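For readers looking for that workaround, here is a minimal sketch of the `ILocalBundling` escape hatch, assuming aws-cdk-lib v2 imports and an esbuild build step (both assumptions, not taken from the comment above). When `tryBundle()` returns true, CDK skips Docker-based bundling entirely, which sidesteps the Docker-in-Docker mount problem:

```ts
import { execSync } from 'child_process';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import { BundlingOptions, DockerImage } from 'aws-cdk-lib';

const code = lambda.Code.fromAsset('lambda', {
  bundling: {
    // Fallback image: only used if local bundling below fails or returns false.
    image: DockerImage.fromRegistry('node:18'),
    local: {
      tryBundle(outputDir: string, _options: BundlingOptions): boolean {
        try {
          // Build on the machine (or devcontainer) running `cdk synth` itself,
          // so no Docker mounts are needed. The esbuild command is illustrative.
          execSync(
            `npx esbuild lambda/index.ts --bundle --platform=node --outfile=${outputDir}/index.js`,
            { stdio: 'inherit' },
          );
          return true;   // local bundling succeeded, Docker is skipped
        } catch {
          return false;  // fall back to Docker bundling
        }
      },
    },
  },
});
// Pass `code` to a lambda.Function as usual.
```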
I hit this issue trying to run cdk in docker with a python lambda function: https://docs.aws.amazon.com/cdk/api/latest/python/aws_cdk.aws_lambda_python/PythonFunction.html I've posted my toy example here: https://github.com/katrinabrock/aws-cdk-dind-py-lambda It fails with … I am running docker engine locally on OS X. Full error message:
@katrinabrock I have the exact same error that you posted when trying to deploy using this github action: https://github.com/youyo/aws-cdk-github-actions Were you able to find a fix?
@pkasravi yes! I solved it by making … I'm not sure if this is possible with github actions.
As mentioned in #21506, I've had similar issues before in other CICD setups. The solution we used there was to use the … I have prepared a WIP change, but I'm not sure if it would be enough to make the parameter accessible: main...webratz:aws-cdk:master
This issue has received a significant amount of attention so we are automatically upgrading its priority. A member of the community will see the re-prioritization and provide an update on the issue.
(#22829) relates to #8799, follow up to stale #21660

## Describe the feature

Ability to add the [--volumes-from](https://docs.docker.com/engine/reference/commandline/run/#mount-volumes-from-container---volumes-from) flag when bundling assets with docker. This enables people using Docker in Docker to use CDK's bundling functionality, which is currently not possible.

## Use Case

CICD systems often run within a docker container already. Many systems mount `/var/run/docker.sock` from the host system into the CICD container. When running bundling within such a container it currently breaks, as docker assumes the path is from the host system, not from within the CICD container. The option allows mounting the data from any other container. Very often that will be the current one, which can be referenced via the `HOSTNAME` environment variable.

## Proposed Solution

Add an optional property to [DockerRunOptions](https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.DockerRunOptions.html) and [BundlingOptions](https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.BundlingOptions.html) that would translate into `--volumes-from {user provided option}`. This change would not result in any CloudFormation changes, only in the docker commands performed when bundling. Due to using the `--volumes-from` option, docker will, instead of trying to find the path on the host (where it does not exist), use the volume that is created by the container C1 that is actually running the CDK. With that it is able to access the files from CDK and can continue the build.

![Docker volumes from](https://user-images.githubusercontent.com/2162832/193787498-de03c66c-7bce-458b-9776-7ba421b9d929.jpg)

The following plain docker steps show how this works from the docker side, and why we need to adjust the `--volumes-from` parameter.

```sh
docker volume create builds
docker run -v /var/run/docker.sock:/var/run/docker.sock -v builds:/builds -it docker
```

Now, within the just created docker container, run the following commands.

```sh
echo "testfile" > /builds/my-share-file.txt
docker run --rm --name DinDContainer --volumes-from="${HOSTNAME}" ubuntu bash -c "ls -hla /builds"
```

We see that the second container C2 (here `DinDContainer`) has the same files available as the container C1.

## Alternative solutions

I'm not aware of alternative solutions for this docker-in-docker use case, besides not relying on docker at all, which is out of scope for this MR.

----

### All Submissions:

* [X] Have you followed the guidelines in our [Contributing guide?](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md)

### Adding new Unconventional Dependencies:

* [ ] This PR adds new unconventional dependencies following the process described [here](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md/#adding-new-unconventional-dependencies)

### New Features

* [ ] Have you added the new feature to an [integration test](https://github.com/aws/aws-cdk/blob/main/INTEGRATION_TESTS.md)?
* [x] Did you use `yarn integ` to deploy the infrastructure and generate the snapshot (i.e. `yarn integ` without `--dry-run`)? I ran it, but it seems not to have generated anything; I might need some guidance there.

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
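Purely as an illustration of the proposal, a sketch of what this could look like from a CDK app running inside the CI container C1. The `volumesFrom` property is the option proposed in this PR, and the image and build command are placeholder assumptions:

```ts
import * as lambda from 'aws-cdk-lib/aws-lambda';
import { DockerImage } from 'aws-cdk-lib';

const code = lambda.Code.fromAsset('lambda', {
  bundling: {
    image: DockerImage.fromRegistry('node:18'),
    command: ['bash', '-c', 'npm ci && npm run build && cp -r dist/* /asset-output/'],
    // Mount the volumes of the container running CDK (inside a container,
    // HOSTNAME defaults to the container id) instead of bind-mounting host
    // paths that do not exist on the Docker host.
    volumesFrom: [process.env.HOSTNAME!],
  },
});
```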
Following up on the changes mentioned above: mounting the docker volume works, and everything is there at the correct path. This still does not work, though, because the bundler creates bind mounts, which always refer to a path on the host, not to one in the other containers. Looking at previous comments and other issues, I think the initial approach that @alekitto suggested would make sense as an alternative to the current bind-mount approach; a possible process is sketched below.
This should not replace the current variant with bind mounts, but be an optional variant.
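For concreteness, a minimal sketch of that optional flow in TypeScript, shelling out to the docker CLI and following the helper-container sequence suggested earlier in the thread. The function name, volume names, and the `alpine` helper image are illustrative assumptions, not CDK code:

```ts
import { execSync } from 'child_process';

function bundleViaVolumes(sourcePath: string, outputPath: string, image: string, command: string[]): void {
  const id = Date.now().toString(36);              // crude uniqueness to avoid name collisions
  const inputVol = `asset-input-${id}`;
  const outputVol = `asset-output-${id}`;
  const helper = `asset-helper-${id}`;
  const run = (cmd: string) => execSync(cmd, { stdio: 'inherit' });

  try {
    run(`docker volume create ${inputVol}`);
    run(`docker volume create ${outputVol}`);
    // The helper container exists only so `docker cp` can reach the two volumes.
    run(`docker run --name ${helper} -v ${inputVol}:/asset-input -v ${outputVol}:/asset-output alpine`);
    run(`docker cp ${sourcePath}/. ${helper}:/asset-input`);
    // Bundle using named volumes instead of host bind mounts.
    run(`docker run --rm -v ${inputVol}:/asset-input -v ${outputVol}:/asset-output -w /asset-input ${image} ${command.join(' ')}`);
    // Copy the bundling output back out through the helper.
    run(`docker cp ${helper}:/asset-output/. ${outputPath}`);
  } finally {
    // Best-effort cleanup; swallow errors so an earlier failure stays visible.
    try { run(`docker rm -f ${helper}`); } catch {}
    try { run(`docker volume rm ${inputVol} ${outputVol}`); } catch {}
  }
}
```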
This is not anywhere close to code that could be published, but it's a proof of concept that shows that this approach generally seems to work. There are a few issues with it, like that the docker … Maybe it can be discussed how an actual solution should be structured. Also, I'm running out of time to work on this currently, so I'm not sure if and when I could continue.
Thanks @webratz. Just one thing though:

dockerExec(['cp', `${this.sourcePath}/.`, `${copyContainerName}:${AssetStaging.BUNDLING_INPUT_DIR}`]);

as if (citing the doc) … This would avoid the additional …
Thanks for the hint. I updated my branch.
…cker (#23576) Fixes #8799

This implements an alternative variant of how to get files into bundling containers. It is more flexible for complex Docker setup scenarios, but also more complex and slower. Therefore it is not enabled as a default, but offered as an additional option. For details on the approach please refer to the linked issue.

----

### All Submissions:

* [X] Have you followed the guidelines in our [Contributing guide?](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md)

### Adding new Construct Runtime Dependencies:

* [ ] This PR adds new construct runtime dependencies following the process described [here](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md/#adding-construct-runtime-dependencies)

### New Features

* [X] Have you added the new feature to an [integration test](https://github.com/aws/aws-cdk/blob/main/INTEGRATION_TESTS.md)?
* [x] Did you use `yarn integ` to deploy the infrastructure and generate the snapshot (i.e. `yarn integ` without `--dry-run`)?

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
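As a usage illustration, a sketch of opting in to the new variant, assuming the option surfaced as a `bundlingFileAccess` setting with a volume-copy mode as described in this PR; the image and build command are placeholders:

```ts
import * as lambda from 'aws-cdk-lib/aws-lambda';
import { BundlingFileAccess, DockerImage } from 'aws-cdk-lib';

const code = lambda.Code.fromAsset('lambda', {
  bundling: {
    image: DockerImage.fromRegistry('node:18'),
    command: ['bash', '-c', 'npm ci && npm run build && cp -r dist/* /asset-output/'],
    // Copy files in and out through a helper container and named volumes
    // instead of bind-mounting host paths (the default bind-mount behavior).
    bundlingFileAccess: BundlingFileAccess.VOLUME_COPY,
  },
});
```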
…flows (#2510)

BREAKING CHANGE: default to node16. To use any other Node version, explicitly provide the desired version number.

BREAKING CHANGE: remove `jsii/superchain` image from AwsCdkConstructLibrary workflows. Using the `jsii/superchain` image provides no tangible benefit over installing dependencies with GitHub Actions. However, AWS CDK constructs often need to run docker commands, and using the image forces GitHub Actions to execute them as Docker in Docker. This does not work in many situations and generic fixes are not reliable (see #2094 & aws/aws-cdk#8799). Additionally, the existing build and package workflows had inconsistent usage of the image, causing more problems. To restore the old behavior, set `options.workflowContainerImage` to the desired image.

Fixes #2094
Closes #1065

---

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
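For context, a sketch of that opt-out in a `.projenrc.ts`; only `workflowContainerImage` comes from the change above, while every other option value and the image tag are illustrative assumptions:

```ts
import { awscdk } from 'projen';

const project = new awscdk.AwsCdkConstructLibrary({
  author: 'Jane Doe',                        // illustrative
  authorAddress: 'jane@example.com',         // illustrative
  cdkVersion: '2.50.0',                      // illustrative
  defaultReleaseBranch: 'main',
  name: 'my-construct-library',
  repositoryUrl: 'https://github.com/example/my-construct-library', // illustrative
  // Restore the container-based workflows instead of plain GitHub Actions runners.
  workflowContainerImage: 'jsii/superchain:1-buster-slim-node16',   // assumed tag
});

project.synth();
```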
Core

`AssetStaging` is now exposing a `bundling` option key allowing assets to be built through a docker image before being uploaded. Unfortunately the docker command is now using volume mount flags to mount the input and output folders, making it impossible to execute if docker is set to run on a remote host or in a docker-in-docker env.

Use Case

I encountered this problem trying to build a couple of lambdas (and a custom resource provider) written in typescript on a Gitlab CI instance with docker-in-docker or executing docker commands via docker-machine.

Proposed Solution

My proposal is to create two temporary volumes on the target docker env, one for inputs and one for outputs. Then the first volume can be filled by running a `busybox` image as a helper and calling `docker cp` to fill the volume. Once the cp has finished, the helper is stopped and the build command is invoked. After that, if the exit code is 0, another helper is started and `docker cp` is used to copy the outputs back to the cdk env.

This is a 🚀 Feature Request