[core] allow asset bundling on docker remote host / docker in docker #8799

Closed
1 of 2 tasks
alekitto opened this issue Jun 29, 2020 · 31 comments · Fixed by #23576
Labels: @aws-cdk/core · effort/small · feature-request · p1

Comments

@alekitto
Contributor

Core AssetStaging now exposes a bundling option that allows assets to be built inside a docker image before being uploaded.
Unfortunately, the docker command uses volume mount flags to map the input and output folders, making it impossible to execute when docker runs on a remote host or in a docker-in-docker environment.

Use Case

I encountered this problem while trying to build a couple of lambdas (and a custom resource provider) written in TypeScript on a GitLab CI instance, either with docker-in-docker or executing docker commands via docker-machine.

Proposed Solution

My proposal is to create two temporary volumes on the target docker environment, one for inputs and one for outputs.
The input volume can then be filled by running a busybox image as a helper and calling docker cp.
Once the copy has finished, the helper is stopped and the build command is invoked.
After that, if the exit code is 0, another helper is started and docker cp is used to copy the outputs back to the cdk environment.

  • 👋 I may be able to implement this feature request
  • ⚠️ This feature might incur a breaking change

This is a 🚀 Feature Request

@alekitto alekitto added feature-request A feature should be added or improved. needs-triage This issue or PR still needs to be triaged. labels Jun 29, 2020
@github-actions github-actions bot added the @aws-cdk/core Related to core CDK functionality label Jun 29, 2020
@rinfield

rinfield commented Jun 30, 2020

I'm also struggling with this problem, using aws-lambda-nodejs in a VSCode Remote Container environment
(/var/run/docker.sock sharing).

The project workspace is already mounted...

I wish the good old non-docker implementation would come back for compatibility.

@eladb
Contributor

eladb commented Jun 30, 2020

Copy @jogold

The plot thickens... (which is a good thing!)

@jogold
Contributor

jogold commented Jun 30, 2020

@alekitto thanks for opening this issue.

Can you detail the exact problem you're seeing?

making it impossible to execute when docker runs on a remote host or in a docker-in-docker environment.

It currently runs without issues in CodeBuild, which is a docker-in-docker setup. Can you elaborate on the GitLab CI setup maybe?

Is your solution something along those lines?

$ docker volume create asset-input
$ docker volume create asset-output
$ docker run -v asset-input:/asset-input -v asset-output:/asset-output --name helper busybox
$ docker cp <asset source> helper:/asset-input
$ docker run --rm -v asset-input:/asset-input -v asset-output:/asset-output <user command>
$ docker cp helper:/asset-output <staged bundling dir>
$ docker rm helper
$ docker volume rm asset-input asset-output

For the @aws-cdk/aws-lambda-nodejs.NodejsFunction we currently mount the project root as the asset source; for a large repo (or even a monorepo) this represents lots of files. How does this affect docker cp?

@alekitto
Contributor Author

Can you detail the exact problem you're seeing?

I'm trying to compile a lambda from TypeScript down to JS to be executed on the node 10.x runtime.
A docker:dind service container is run and correctly responds at tcp://docker:2375, then the build starts.
When executing cdk diff, I can see that another node container has been pulled and run by cdk with the arguments set in the code's bundlingOptions.
By coincidence, GitLab CI mounts the build folder at the same path in all the containers, so the build container and the docker container share the same cdk code at the same path.
The dependencies are installed correctly and the compilation executes successfully, but then an error is thrown stating that Bundling did not produce any output. Inspecting the docker:dind container, I noticed that the temporary path created by cdk to be mounted as asset-output exists in both containers, but only the one in the docker container is populated with the compiled script.
That's because the -v docker option maps into the container a volume (or a path) that lives on the docker host, not on the machine calling the docker CLI.

It currently runs without issues in CodeBuild, which is a docker-in-docker setup. Can you elaborate on the GitLab CI setup maybe?

The build container is a node:14-buster image with the docker CLI added. A docker:dind service image is run for the job, responding at the docker hostname.

Is your solution something along those lines?

$ docker volume create asset-input
$ docker volume create asset-output
$ docker run -v asset-input:/asset-input -v asset-output:/asset-output --name helper busybox
$ docker cp <asset source> helper:/asset-input
$ docker run --rm -v asset-input:/asset-input -v asset-output:/asset-output <user command>
$ docker cp helper:/asset-output <staged bundling dir>
$ docker rm helper
$ docker volume rm asset-input asset-output

Yes, with an auto-generated id appended to the volume names to avoid collisions.

For the @aws-cdk/aws-lambda-nodejs.NodejsFunction we currently mount the project root as the asset source; for a large repo (or even a monorepo) this represents lots of files. How does this affect docker cp?

IIRC the docker cp command builds a tar archive internally and streams it to the docker engine, but I don't know whether the number of files affects performance significantly compared to the size of the files.

@SomayaB SomayaB removed the needs-triage This issue or PR still needs to be triaged. label Jun 30, 2020
@alekitto
Contributor Author

Could this be related to #8544?
Creating volumes for input and output could avoid the osxfs performance issues on IO-intensive operations (by skipping the continuous syncs between the macOS filesystem and the virtual machine hosting docker).

@alekitto
Contributor Author

alekitto commented Jul 1, 2020

UPDATE: my current workaround (valid with the docker executor on gitlab-runner only) is to configure gitlab-runner to share the /tmp path, mounting it as a volume across all the containers in the same build job.
This way the build (cdk) container and the docker:dind container share the same /tmp, allowing cdk to find the output files.
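
As a hedged sketch of that workaround, assuming a self-managed runner whose config.toml you can edit (the volumes setting is gitlab-runner's docker-executor option for mounting extra volumes into the job's containers; paths are illustrative):

```sh
# In the runner's /etc/gitlab-runner/config.toml, extend the existing
# [runners.docker] section so /tmp is shared across the job's containers:
#
#   [runners.docker]
#     volumes = ["/cache", "/tmp"]
#
# Then restart the runner so the change is picked up:
gitlab-runner restart
```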

@jogold
Contributor

jogold commented Jul 1, 2020

UPDATE: my current workaround (valid with the docker executor on gitlab-runner only) is to configure gitlab-runner to share the /tmp path, mounting it as a volume across all the containers in the same build job.
This way the build (cdk) container and the docker:dind container share the same /tmp, allowing cdk to find the output files.

@alekitto This has been fixed in #8601 and released in v1.46.0, can you try with a version >= 1.46.0?

@alekitto
Contributor Author

alekitto commented Jul 1, 2020

@alekitto This has been fixed in #8601 and released in v1.46.0, can you try with a version >= 1.46.0?

I tried with 1.47 locally, but I cannot make it work when using docker-machine (it throws a package.json does not exist error), because the input files are on my computer and it tries to mount a non-existent path on the docker host.

@jogold
Contributor

jogold commented Jul 1, 2020

it throws a package.json does not exist error

CDK or inside the container?

it tries to mount a non-existent path

which path?

@alekitto
Contributor Author

alekitto commented Jul 1, 2020

CDK or inside the container?

The bundling container, when executing npm install.

which path?

It tries to mount /home/alekitto/my-project-folder/cdk/lib/construct/kms/lambda, which is the path of the code to be compiled on my local machine, and which does not exist in the newly created docker-machine where the docker engine is hosted.

@jogold
Contributor

jogold commented Jul 1, 2020

which does not exist in the newly created docker-machine where the docker engine is hosted.

Not sure I'm following here... can you detail this?

@alekitto
Contributor Author

alekitto commented Jul 1, 2020

Not sure I'm following here... can you detail this?

I'm not currently running the docker engine on my local machine.
After docker-machine create --driver amazonec2 docker-aws, an EC2 instance is provisioned to run the docker engine, exposing the docker tcp port (2376).
After provisioning has finished, I run eval $(docker-machine env docker-aws) to be able to run docker commands on the newly created host.

Then I try to run cdk diff which invokes the docker cli (now pointing to the remote docker host).

The command built by cdk is docker run --rm -v /home/alekitto/my-project-folder/cdk/lib/construct/kms/lambda:/asset-input -v /home/alekitto/my-project-folder/cdk/.cdk.staging:/asset-output node:14-buster sh -c "npm install && npm run build && cp app.js /asset-output"

The problem is that /home/alekitto/my-project-folder/cdk exists on my local machine, but not on the docker host.
When that command is launched, the docker host is instructed to mount the specified path into the container, but on the host machine that path does not exist.
The docker engine then creates all the folders, which are obviously empty, and mounts them into the new container.
When the container tries to execute npm install, the command exits with a package.json does not exist error, because the file is not present on the docker host at the specified path.

Inspecting the docker machine via SSH, I can see the whole folder structure, but no files are present, because nothing has been copied from my local computer to the docker host.
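
A minimal sketch that reproduces this failure mode (the machine name and paths are illustrative; any remote engine behaves the same way):

```sh
# Point the local docker CLI at the remote engine.
eval "$(docker-machine env docker-aws)"

# Bind-mount a path that exists locally but not on the remote host.
# The remote engine silently creates it as an empty directory tree.
docker run --rm \
  -v "$HOME/my-project-folder/cdk/lib/construct/kms/lambda:/asset-input" \
  node:14-buster ls -A /asset-input
# Prints nothing: the mount is empty, which is why npm install
# fails with "package.json does not exist".
```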

@eladb eladb added the effort/small Small work item – less than a day of effort label Jul 16, 2020
@eladb eladb added the p2 label Aug 17, 2020
@eladb eladb assigned jogold and rix0rrr and unassigned eladb Aug 17, 2020
@eladb eladb assigned eladb and unassigned rix0rrr Oct 6, 2020
@CarlosDomingues

CarlosDomingues commented Jan 12, 2021

I was bitten by this issue today when trying to use aws-cdk in a VSCode Remote Container environment on Windows 10. Bundling assets with Docker does not work in Docker-in-Docker environments running on WSL2.

Edit: I found out about ILocalBundling while reading this blog. It worked fine; I guess that's the best solution for people using VS Code + Remote Containers, since any Docker-in-Docker problem is avoided altogether.

Example in Python.
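
A sketch of what such a local bundling step boils down to, as a shell script (paths and build commands are illustrative; CDK invokes the user-supplied tryBundle with an output directory, and the step just has to place the built asset there, with no docker involved):

```sh
#!/bin/sh
# Hypothetical local bundling step: $1 is the output directory that
# CDK hands to the local bundler; everything runs directly on the host.
set -e
OUT="$1"
cd lambda
npm ci
npm run build
cp dist/app.js "$OUT/"
```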

@katrinabrock

I hit this issue trying to run cdk in docker with a python lambda function: https://docs.aws.amazon.com/cdk/api/latest/python/aws_cdk.aws_lambda_python/PythonFunction.html

I've posted my toy example here: https://github.com/katrinabrock/aws-cdk-dind-py-lambda

Fails with Bundling did not produce any output. Check that content is written to /asset-output. I'm not sure where I can see the full docker command that CDK is running, to see exactly what is being written to /asset-output. The traceback references /tmp. I tried mounting /tmp from my docker host into the container where I'm running CDK, as @alekitto suggested, but that didn't solve it.

I am running docker engine locally on OS X.

Full error message

runner_1  | Bundling asset toy-cdk/MyPythonLambda/Code/Stage...
runner_1  | jsii.errors.JavaScriptError: 
runner_1  |   Error: Bundling did not produce any output. Check that content is written to /asset-output.
runner_1  |       at AssetStaging.bundle (/tmp/jsii-kernel-tSWP3N/node_modules/@aws-cdk/core/lib/asset-staging.js:313:19)
runner_1  |       at AssetStaging.stageByBundling (/tmp/jsii-kernel-tSWP3N/node_modules/@aws-cdk/core/lib/asset-staging.js:183:14)
runner_1  |       at stageThisAsset (/tmp/jsii-kernel-tSWP3N/node_modules/@aws-cdk/core/lib/asset-staging.js:64:41)
runner_1  |       at Cache.obtain (/tmp/jsii-kernel-tSWP3N/node_modules/@aws-cdk/core/lib/private/cache.js:28:17)
runner_1  |       at new AssetStaging (/tmp/jsii-kernel-tSWP3N/node_modules/@aws-cdk/core/lib/asset-staging.js:88:48)
runner_1  |       at new Asset (/tmp/jsii-kernel-tSWP3N/node_modules/@aws-cdk/aws-s3-assets/lib/asset.js:28:25)
runner_1  |       at AssetCode.bind (/tmp/jsii-kernel-tSWP3N/node_modules/@aws-cdk/aws-lambda/lib/code.js:225:26)
runner_1  |       at new Function (/tmp/jsii-kernel-tSWP3N/node_modules/@aws-cdk/aws-lambda/lib/function.js:95:33)
runner_1  |       at new PythonFunction (/tmp/jsii-kernel-tSWP3N/node_modules/@aws-cdk/aws-lambda-python/lib/function.js:34:9)
runner_1  |       at /tmp/tmp_fjzvna5/lib/program.js:2700:58
runner_1  | 
runner_1  | The above exception was the direct cause of the following exception:
runner_1  | 
runner_1  | Traceback (most recent call last):
runner_1  |   File "app.py", line 9, in <module>
runner_1  |     ToyCdkStack(app, "toy-cdk", env={'region': 'us-west-2'})
runner_1  |   File "/usr/local/lib/python3.6/dist-packages/jsii/_runtime.py", line 83, in __call__
runner_1  |     inst = super().__call__(*args, **kwargs)
runner_1  |   File "/opt/toy-cdk/toy_cdk/toy_cdk_stack.py", line 17, in __init__
runner_1  |     entry = '/opt/toy-cdk/lambda/'
runner_1  |   File "/usr/local/lib/python3.6/dist-packages/jsii/_runtime.py", line 83, in __call__
runner_1  |     inst = super().__call__(*args, **kwargs)
runner_1  |   File "/usr/local/lib/python3.6/dist-packages/aws_cdk/aws_lambda_python/__init__.py", line 243, in __init__
runner_1  |     jsii.create(PythonFunction, self, [scope, id, props])
runner_1  |   File "/usr/local/lib/python3.6/dist-packages/jsii/_kernel/__init__.py", line 272, in create
runner_1  |     for iface in getattr(klass, "__jsii_ifaces__", [])
runner_1  |   File "/usr/local/lib/python3.6/dist-packages/jsii/_kernel/providers/process.py", line 348, in create
runner_1  |     return self._process.send(request, CreateResponse)
runner_1  |   File "/usr/local/lib/python3.6/dist-packages/jsii/_kernel/providers/process.py", line 330, in send
runner_1  |     raise JSIIError(resp.error) from JavaScriptError(resp.stack)
runner_1  | jsii.errors.JSIIError: Bundling did not produce any output. Check that content is written to /asset-output.
runner_1  | Subprocess exited with error 1
docker_runner_1 exited with code 1

@pkasravi

@katrinabrock I have the exact same error that you posted when trying to deploy using this github action: https://github.com/youyo/aws-cdk-github-actions

Were you able to find a fix?

@katrinabrock

@pkasravi yes!

I solved it by making cdk.out a mounted volume with the exact same path inside the docker container as on the host.

https://github.com/katrinabrock/aws-cdk-dind-py-lambda/blob/master/docker-compose.yml#L8

I'm not sure if this is possible with github actions.
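
The same trick without docker-compose, as a sketch (the image name is hypothetical; the point is that the host path and the in-container path are identical, so the bind mounts requested by the inner docker run also resolve on the host):

```sh
# Share the engine socket and mount the project at the *same*
# absolute path inside the container as on the host.
docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v "$PWD:$PWD" \
  -w "$PWD" \
  my-cdk-image npx cdk synth
# When CDK launches its bundling container with
# -v $PWD/cdk.out/...:/asset-output, that path now exists on the
# host as well, so the bundling output lands where CDK expects it.
```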

@webratz
Contributor

webratz commented Aug 10, 2022

As mentioned in #21506, I've had similar issues before in other CI/CD setups. The solution we used there was to pass --volumes-from=$HOSTNAME as a flag. With that, docker will look in the volumes of the currently running container and should be able to find the path.

I have prepared a WIP change, though I'm not sure whether it is enough to make the parameter accessible: main...webratz:aws-cdk:master
Maybe someone with more insight can have a look at whether it makes sense to continue there and turn it into a proper PR.
I would probably need a bit of help with the tests, as a specific constellation of docker is needed to make this testable (with the mounted docker sockets as described above).

@github-actions

This issue has received a significant amount of attention so we are automatically upgrading its priority. A member of the community will see the re-prioritization and provide an update on the issue.

@github-actions github-actions bot added p1 and removed p2 labels Oct 23, 2022
@mrgrain mrgrain assigned mrgrain and jogold and unassigned jogold Dec 8, 2022
@mrgrain mrgrain changed the title [core] allow asset bundling on docker remote host [core] allow asset bundling on docker remote host / docker in docker Dec 9, 2022
mergify bot pushed a commit that referenced this issue Dec 12, 2022 (#22829)

relates to #8799 
follow up to stale #21660

## Describe the feature
Ability to add the [--volumes-from](https://docs.docker.com/engine/reference/commandline/run/#mount-volumes-from-container---volumes-from) flag when bundling assets with docker.
This enables people using Docker in Docker to use CDK's bundling functionality, which is currently not possible.

## Use Case
CI/CD systems often already run within a docker container. Many systems mount `/var/run/docker.sock` from the host system into the CI/CD container. When bundling runs within such a container it currently breaks, as docker assumes the path is from the host system, not from within the CI/CD container.
The option allows mounting the data from any other container. Very often this will be the current one, which can be referenced via the `HOSTNAME` environment variable.

## Proposed Solution
Add an optional property to [DockerRunOptions](https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.DockerRunOptions.html) and [BundlingOptions](https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.BundlingOptions.html) that translates into --volumes-from {user provided option}.

This change would not result in any CloudFormation changes, only in the docker commands performed when bundling.

With the --volumes-from option, docker will, instead of trying to find the path on the host (where it does not exist), use the volume created by the container C1 that is actually running the CDK. With that it is able to access the files from CDK and can continue the build.

![Docker volumes from](https://user-images.githubusercontent.com/2162832/193787498-de03c66c-7bce-458b-9776-7ba421b9d929.jpg)

The following plain docker steps show how this works from the docker side, and why we need to pass the `--volumes-from` parameter.

```sh
docker volume create builds
docker run -v /var/run/docker.sock:/var/run/docker.sock -v builds:/builds -it docker
```
Now within the just created docker container, run the following commands.

```sh
echo "testfile" > /builds/my-share-file.txt
docker run --rm --name DinDContainer --volumes-from="${HOSTNAME}" ubuntu bash -c "ls -hla /builds"
```
We see that the second container C2 (here `DinDContainer`) has the same files available as container C1.

## Alternative solutions

I'm not aware of alternative solutions for this docker-in-docker use case, besides not relying on docker at all, which is out of scope for this PR.

----

### All Submissions:

* [X] Have you followed the guidelines in our [Contributing guide?](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md)

### Adding new Unconventional Dependencies:

* [ ] This PR adds new unconventional dependencies following the process described [here](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md/#adding-new-unconventional-dependencies)

### New Features

* [ ] Have you added the new feature to an [integration test](https://github.com/aws/aws-cdk/blob/main/INTEGRATION_TESTS.md)?
	* [x] Did you use `yarn integ` to deploy the infrastructure and generate the snapshot (i.e. `yarn integ` without `--dry-run`)?
I ran it, but it seems not to have generated anything; I might need some guidance there.

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
@webratz
Contributor

webratz commented Dec 20, 2022

Following up on the changes mentioned above: mounting the docker volume works, and everything is there at the correct path. This still does not work, though, because the bundler creates bind mounts, which always refer to a path on the host, not in other containers.
So while with volumesFrom the actual volume is there, it won't be used, since the bind mounts for /asset-input and /asset-output don't work.

Looking at previous comments and other issues, I think the initial approach that @alekitto suggested would make sense as an alternative to the current bind-mount approach. As a process it could look like this:

  • Create temporary docker volumes
    • docker volume create asset-input
    • docker volume create asset-output
  • create helper container & copy files
    • docker run --name copy-helper -v asset-input:/asset-input -v asset-output:/asset-output alpine
    • docker cp foobar/* copy-helper:/asset-input
  • launch bundling as it is today, but mount the just created volumes instead of bind mount
  • copy files via helper container to local host
  • docker cp copy-helper:/asset-output my/cdk.out/.assethash
  • clean up
    • remove helper container: docker stop copy-helper
    • remove volumes: docker volume rm -f asset-input, docker volume rm -f asset-output

This should not replace the current variant with bind mounts, but be an optional variant

@webratz
Contributor

webratz commented Dec 23, 2022

This is not anywhere close to code that could be published, but it's a proof of concept showing that this approach generally seems to work:
fabddf0

There are a few issues with it, for example that docker cp will always copy the folder itself into the target, which currently requires adding an extra mv to the commands.

Maybe we can discuss how an actual solution should be structured. Also, I'm running out of time to work on this at the moment, so I'm not sure if and when I could continue.

@alekitto
Contributor Author

Thanks @webratz
Your POC is very similar to the one I personally use in my projects (obviously I had to copy the dockerExec function).

Just one thing though: based on https://docs.docker.com/engine/reference/commandline/cp/, the first copy command should be

dockerExec(['cp', `${this.sourcePath}/.`, `${copyContainerName}:${AssetStaging.BUNDLING_INPUT_DIR}`]);

since, citing the doc, if SRC_PATH does end with /. (that is: slash followed by dot), the content of the source directory is copied into this (destination) directory.

This would avoid the additional mv command.
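
Putting the process from the previous comment together with this fix, a consolidated sketch of the optional volume-copy variant (volume names, images, paths, and the $RANDOM suffix are illustrative; CDK would use its own generated ids):

```sh
ID="asset-$RANDOM"   # stand-in for an auto-generated unique id

# Temporary volumes on the target docker environment
docker volume create "${ID}-input"
docker volume create "${ID}-output"

# Helper container that keeps both volumes reachable for docker cp;
# it never has to run, created is enough.
docker container create --name "${ID}-helper" \
  -v "${ID}-input:/asset-input" -v "${ID}-output:/asset-output" alpine

# Trailing "/." copies the directory contents, avoiding the extra mv
docker cp "./lambda/." "${ID}-helper:/asset-input"

# Bundle using the named volumes instead of bind mounts
docker run --rm \
  -v "${ID}-input:/asset-input" -v "${ID}-output:/asset-output" \
  node:14-buster sh -c "cd /asset-input && npm ci && npm run build && cp app.js /asset-output"

# Copy the results back to the cdk side, then clean up
mkdir -p ./bundle-output
docker cp "${ID}-helper:/asset-output/." ./bundle-output
docker rm "${ID}-helper"
docker volume rm "${ID}-input" "${ID}-output"
```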

@webratz
Contributor

webratz commented Jan 3, 2023

Thanks for the hint, I updated my branch.
With that it works just fine. Now the tricky part will be how to properly integrate and test this in the core lib.
I started to dig a little into this, but none of the stubs etc. can deal with anything other than a docker run, and they actively fail with the other commands that are needed here.

brennanho pushed a commit to brennanho/aws-cdk that referenced this issue Jan 20, 2023 (aws#22829)
@mergify mergify bot closed this as completed in #23576 Jan 27, 2023
mergify bot pushed a commit that referenced this issue Jan 27, 2023 (…cker, #23576)

Fixes #8799 

This implements an alternative variant of how to get files into bundling containers. It is more flexible for complex Docker setup scenarios, but also more complex and slower. Therefore it is not enabled by default, but offered as an additional option.

For details to the approach please refer to the linked issue.

----

### All Submissions:

* [X] Have you followed the guidelines in our [Contributing guide?](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md)

### Adding new Construct Runtime Dependencies:

* [ ] This PR adds new construct runtime dependencies following the process described [here](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md/#adding-construct-runtime-dependencies)

### New Features

* [X] Have you added the new feature to an [integration test](https://github.com/aws/aws-cdk/blob/main/INTEGRATION_TESTS.md)?
	* [x] Did you use `yarn integ` to deploy the infrastructure and generate the snapshot (i.e. `yarn integ` without `--dry-run`)?

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
@github-actions

⚠️COMMENT VISIBILITY WARNING⚠️

Comments on closed issues are hard for our team to see.
If you need more assistance, please either tag a team member or open a new issue that references this one.
If you wish to keep having a conversation with other community members under this issue feel free to do so.

brennanho pushed a commit to brennanho/aws-cdk that referenced this issue Feb 22, 2023 (aws#22829)
mrgrain added a commit to projen/projen that referenced this issue Mar 15, 2023 (… in workflows)
mrgrain added a commit to projen/projen that referenced this issue Mar 15, 2023 (… in workflows)
mrgrain added a commit to projen/projen that referenced this issue Mar 21, 2023 (… in workflows)
mergify bot pushed a commit to projen/projen that referenced this issue Mar 21, 2023 (…flows, #2510)

BREAKING CHANGE: default to node16
To use any other Node version, explicitly provide the desired version number

BREAKING CHANGE: remove `jsii/superchain` image from AwsCdkConstructLibrary workflows
Using the `jsii/superchain` image provides no tangible benefit over installing dependencies with GitHub Actions. However, AWS CDK constructs often need to run docker commands, so using the image forces GitHub Actions to execute commands as Docker in Docker. This does not work in many situations, and generic fixes are not reliable (see #2094 & aws/aws-cdk#8799). Additionally, the existing build and package workflows used the image inconsistently, causing more problems.

To restore the old behavior, set `options.workflowContainerImage` to the desired image.

Fixes #2094
Closes #1065

---
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.