[pipelines] 1.67.0 breaks custom asset bundling when using multiple pipelines #10877
@danielfrankcom can you share the exact folder structure/content of your `cdk.out`?
@jogold absolutely! To generate this, I deleted the `cdk.out` directory and re-ran the synth.
After generating this tree I changed the CDK dependencies in my project, deleted the `cdk.out` directory again, and synthesized a second time.
You can see in the second tree that the second asset only contains the raw source files, without the bundled directory structure.
@rix0rrr this is also what I was talking about on Wednesday with building assets multiple times.
Yep, I figured as much. Feels like we can do two things:
I feel like the 2nd one is better. Thoughts?
Second option sounds great.
Agree
Not sure we need an extra directory for this; leaving all the assets in the top level `cdk.out` would work.

A note on the origin of this issue: caching works, but the second time we come across the asset there's no bundled output (`packages/@aws-cdk/core/lib/asset-staging.ts`, lines 190 to 191 at `ddff37c`) because bundling was skipped. But since the staged bundle cannot be found (lines 205 to 208), it falls back to a normal asset copy (lines 210 to 219).
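To make that mechanism concrete, here is a heavily simplified TypeScript sketch of the failure mode; the names (`assetHashCache`, `bundleAndHash`, `plainCopy`) are illustrative and not the actual `asset-staging.ts` internals:

```ts
import * as fs from 'fs';
import * as path from 'path';

const assetHashCache = new Map<string, string>(); // source path -> asset hash

function stage(sourcePath: string, outdir: string, shouldBundle: boolean): string {
  let hash = assetHashCache.get(sourcePath);
  if (hash === undefined) {
    // First encounter: run the bundling for real and remember the hash.
    hash = bundleAndHash(sourcePath, outdir, shouldBundle);
    assetHashCache.set(sourcePath, hash);
  }

  const bundleDir = path.join(outdir, `bundle-${hash}`);
  if (shouldBundle && !fs.existsSync(bundleDir)) {
    // Cache hit while staging into a *different* (nested) assembly outdir:
    // bundling was skipped, the bundled output isn't here, and instead of
    // failing, the logic silently falls back to copying the raw source.
    return plainCopy(sourcePath, path.join(outdir, `asset.${hash}`));
  }
  return bundleDir; // bundled output found (first encounter in this outdir)
}

function bundleAndHash(src: string, outdir: string, bundle: boolean): string {
  const hash = 'deadbeef'; // stand-in for the real content hash
  if (bundle) {
    // Stand-in for Docker bundling: just copy the source into the bundle dir.
    plainCopy(src, path.join(outdir, `bundle-${hash}`));
  }
  return hash;
}

function plainCopy(src: string, dest: string): string {
  fs.mkdirSync(dest, { recursive: true });
  for (const f of fs.readdirSync(src)) {
    fs.copyFileSync(path.join(src, f), path.join(dest, f));
  }
  return dest;
}
```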
Always stage assets in the outdir of the top level Stage/App. Closes aws#10877
Is it fair to consider this case as a logic bug? Seems that the fallback behavior here is an accident and not actually intended, right?
correct
In fact, the code should properly throw if bundling is used and we have a cache hit and we cannot find the already staged asset.
I was thinking that too. Kk, good that we agree :).
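A minimal sketch of the check proposed above, with illustrative names (the real `AssetStaging` code is structured differently): on a cache hit with bundling enabled, a missing staged bundle becomes an error instead of a silent fallback to a plain copy.

```ts
import * as fs from 'fs';

// stagedBundleDir is whatever directory the earlier bundling step was
// expected to have produced (the name is hypothetical).
function assertStagedBundleExists(stagedBundleDir: string): void {
  if (!fs.existsSync(stagedBundleDir)) {
    throw new Error(
      `Bundling was skipped because of a cached asset hash, but the staged ` +
      `bundle '${stagedBundleDir}' cannot be found; refusing to fall back ` +
      `to an unbundled copy`);
  }
}
```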
We stage assets into the Cloud Assembly directory. If there are multiple nested Cloud Assemblies, the same asset will be staged multiple times. This leads to an N-fold increase in size of the Cloud Assembly when used in combination with CDK Pipelines (where N is the number of stages deployed), and may even lead the Cloud Assembly to exceed CodePipeline's maximum artifact size of 250MB.

Add the concept of an `assetOutdir` next to a regular Cloud Assembly `outdir`, so that multiple Cloud Assemblies can share an asset directory. As an initial implementation, the `assetOutdir` of nested Cloud Assemblies is just the regular `outdir` of the root Assembly.

We are playing a bit fast and loose with the semantics of file paths across our code base; many properties just say "the path of X" without making clear whether it's absolute or relative, and if it's relative what it's relative to (`cwd()`? Or the Cloud Assembly directory?). Turns out that especially in dealing with assets, the answer is "can be anything" and things just happen to work out based on who is providing the path and who is consuming it. In order to limit the scope of the changes I needed to make, I kept modifications to the `AssetStaging` class:

* `stagedPath` now consistently returns an absolute path.
* `relativeStagedPath()` returns a path relative to the Cloud Assembly or an absolute path, as appropriate.

Related changes in this PR:

- Refactor the *copying* vs. *bundling* logic in `AssetStaging`. I found the current maze of `if`s and member variable changes too hard to follow to convince myself the new code would be doing the right thing, so I refactored it to reduce the branching factor.
- Switch the tests of `aws-ecr-assets` over to Jest using `nodeunitShim`.

Fixes #10877, fixes #9627, fixes #9917.
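A rough sketch of the `assetOutdir` idea from that description; the `AssemblyDirs` shape and `dirsForNestedAssembly` helper are illustrative, not the actual `cx-api`/`core` interfaces:

```ts
interface AssemblyDirs {
  outdir: string;      // where this Cloud Assembly's manifest/templates go
  assetOutdir: string; // where staged assets go (shared with the root)
}

function dirsForNestedAssembly(root: AssemblyDirs, nestedName: string): AssemblyDirs {
  return {
    // Each nested assembly still gets its own outdir...
    outdir: `${root.outdir}/assembly-${nestedName}`,
    // ...but assets are staged once, in the root assembly's directory,
    // so N stages no longer mean N copies of every asset.
    assetOutdir: root.assetOutdir,
  };
}

// Example: both stages share the root's asset directory.
const root: AssemblyDirs = { outdir: 'cdk.out', assetOutdir: 'cdk.out' };
const stageA = dirsForNestedAssembly(root, 'StageA'); // assets -> cdk.out
const stageB = dirsForNestedAssembly(root, 'StageB'); // assets -> cdk.out
console.log(stageA, stageB);
```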
After updating to CDK 1.67.0, my custom asset bundling logic no longer works while using multiple pipelines. This bundling logic uses the `BundlingOptions` construct to move the Lambda function handler to a nested folder structure. When executing `cdk synth` on the new CDK version, only the first pipeline defined in `app.py` uses the bundling logic, and the other copies the code from the path without modifying it.

If I revert back to 1.66.0 I no longer see this behavior, with no other code changes. Both pipelines produce the correct artifacts using the older CDK version.
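A minimal sketch of this kind of setup (the original reproduction is Python with two pipelines in `app.py`; this TypeScript version with two Stages produces the same nested-assembly layout, and all names and paths are illustrative):

```ts
import * as cdk from '@aws-cdk/core';
import * as lambda from '@aws-cdk/aws-lambda';

class HandlerStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);
    new lambda.Function(this, 'Handler', {
      runtime: lambda.Runtime.PYTHON_3_8,
      handler: 'nested/handler.main',
      // Custom bundling: copy the handler into a nested folder in the asset.
      code: lambda.Code.fromAsset('lambda', {
        bundling: {
          image: cdk.BundlingDockerImage.fromRegistry('python:3.8'),
          command: ['bash', '-c',
            'mkdir -p /asset-output/nested && cp handler.py /asset-output/nested/'],
        },
      }),
    });
  }
}

class MyStage extends cdk.Stage {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StageProps) {
    super(scope, id, props);
    new HandlerStack(this, 'HandlerStack');
  }
}

const app = new cdk.App();
// Two stages (one per pipeline): on 1.67.0 only the first gets the bundled
// asset; the second falls back to a plain copy of the source.
new MyStage(app, 'StageOne');
new MyStage(app, 'StageTwo');
app.synth();
```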
Reproduction Steps
I have stripped down my code to provide a minimal example, and have posted it here.
Running `cdk synth` against this code produces 2 assemblies in `cdk.out`, where each assembly has an asset representing the Lambda function. The `cdk synth` output can also be seen by running the pipeline on AWS, although it is easier to test locally.

What did you expect to happen?
I expected both assets in the `cdk.out` directory to contain the bundled directory structures. Both assets should contain a directory structure that looks like the following:
What actually happened?
Only the first pipeline declared in `app.py` has the bundled directory structure. The second asset is bundled using the default settings, which ignore the custom bundling logic.
Environment
This is a 🐛 Bug Report.