feat(servicecatalog): Add Product Stack Asset Support #21508

Closed · wants to merge 4 commits
8 changes: 8 additions & 0 deletions packages/@aws-cdk/aws-s3-deployment/lib/bucket-deployment.ts
@@ -38,6 +38,13 @@ export interface BucketDeploymentProps {
*/
readonly destinationKeyPrefix?: string;

/**
* If true (the default), the zip file will be synced to the destination S3 bucket and unzipped.
* If false, the file will remain zipped in the destination bucket.
* @default true
*/
readonly unzipFile?: boolean;
Contributor:

#8065 discusses alternatives for deploying a zipped asset. Specifically #8065 (comment)

Have you considered alternatives and why have you chosen this approach?

Author:

Yes, a separate PR will be drafted for the s3-deployment changes that clarifies design considerations and alternatives. The feature addressed by this PR relies on these s3 changes and given that we are working against a deadline, we submitted this PR as a draft to get early feedback solely on the changes made to aws-servicecatalog. Hope this helps to clarify.

Author:

@mrgrain Just to clarify, are you asking for design considerations and alternatives for the entire PR or just for the S3 changes?

Contributor:

I would say the entire PR. This is a pretty big deviation from how we handle assets in the CDK. It looks like you are essentially creating a shadow assets mechanism that we will have to support going forward.


/**
* If this is set, matching files or objects will be excluded from the deployment's sync
* command. This can be used to exclude a file from being pruned in the destination bucket.
@@ -349,6 +356,7 @@ export class BucketDeployment extends Construct {
SourceMarkers: hasMarkers ? sources.map(source => source.markers ?? {}) : undefined,
DestinationBucketName: props.destinationBucket.bucketName,
DestinationBucketKeyPrefix: props.destinationKeyPrefix,
UnzipFile: props.unzipFile,
RetainOnDelete: props.retainOnDelete,
Prune: props.prune ?? true,
Exclude: props.exclude,
Expand Down
20 changes: 12 additions & 8 deletions packages/@aws-cdk/aws-s3-deployment/lib/lambda/index.py
@@ -49,6 +49,7 @@ def cfn_error(message=None):
source_markers = props.get('SourceMarkers', None)
dest_bucket_name = props['DestinationBucketName']
dest_bucket_prefix = props.get('DestinationBucketKeyPrefix', '')
unzip_file = props.get('UnzipFile', 'true') == 'true'
retain_on_delete = props.get('RetainOnDelete', "true") == "true"
distribution_id = props.get('DistributionId', '')
user_metadata = props.get('UserMetadata', {})
@@ -113,7 +114,7 @@ def cfn_error(message=None):
aws_command("s3", "rm", old_s3_dest, "--recursive")

if request_type == "Update" or request_type == "Create":
s3_deploy(s3_source_zips, s3_dest, user_metadata, system_metadata, prune, exclude, include, source_markers)
s3_deploy(s3_source_zips, s3_dest, user_metadata, system_metadata, prune, exclude, include, source_markers, unzip_file)

if distribution_id:
cloudfront_invalidate(distribution_id, distribution_paths)
@@ -130,7 +131,7 @@ def cfn_error(message=None):

#---------------------------------------------------------------------------------------------------
# populate all files from s3_source_zips to a destination bucket
def s3_deploy(s3_source_zips, s3_dest, user_metadata, system_metadata, prune, exclude, include, source_markers):
def s3_deploy(s3_source_zips, s3_dest, user_metadata, system_metadata, prune, exclude, include, source_markers, unzip_file):
# list lengths are equal
if len(s3_source_zips) != len(source_markers):
raise Exception("'source_markers' and 's3_source_zips' must be the same length")
@@ -154,12 +155,15 @@ def s3_deploy(s3_source_zips, s3_dest, user_metadata, system_metadata, prune, ex
s3_source_zip = s3_source_zips[i]
markers = source_markers[i]

archive=os.path.join(workdir, str(uuid4()))
logger.info("archive: %s" % archive)
aws_command("s3", "cp", s3_source_zip, archive)
logger.info("| extracting archive to: %s\n" % contents_dir)
logger.info("| markers: %s" % markers)
extract_and_replace_markers(archive, contents_dir, markers)
if unzip_file:
archive=os.path.join(workdir, str(uuid4()))
logger.info("archive: %s" % archive)
aws_command("s3", "cp", s3_source_zip, archive)
logger.info("| extracting archive to: %s\n" % contents_dir)
logger.info("| markers: %s" % markers)
extract_and_replace_markers(archive, contents_dir, markers)
else:
aws_command("s3", "cp", s3_source_zip, contents_dir)
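A note on the flag handling above: custom resource properties arrive in the handler as strings, which is why the code compares against `'true'` rather than testing a boolean. A minimal standalone sketch (not the actual handler) of that parsing and the resulting branch choice:

```python
# Standalone sketch of the handler's flag parsing. CloudFormation
# serializes custom resource properties, so boolean props arrive as
# the strings "true"/"false" rather than Python booleans.
def parse_flags(props):
    return {
        # A missing key defaults to unzipping, matching `@default true`
        # on the `unzipFile` construct property.
        "unzip_file": props.get("UnzipFile", "true") == "true",
        "retain_on_delete": props.get("RetainOnDelete", "true") == "true",
    }

def choose_action(props):
    # Mirrors the if/else above: extract the archive, or copy it as-is.
    return "extract" if parse_flags(props)["unzip_file"] else "copy"
```

With `UnzipFile` set to `"false"`, `choose_action` returns `"copy"`, i.e. the zip is synced to the destination bucket unchanged.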

# sync from "contents" to destination

30 changes: 30 additions & 0 deletions packages/@aws-cdk/aws-servicecatalog/README.md
@@ -185,6 +185,36 @@ const product = new servicecatalog.CloudFormationProduct(this, 'Product', {
});
```

You can now reference assets in a Product Stack. For example, you can supply the handler code for a Lambda function, or an S3 asset, directly from a local asset file.

```ts
import * as lambda from '@aws-cdk/aws-lambda';
import * as cdk from '@aws-cdk/core';

class LambdaProduct extends servicecatalog.ProductStack {
  constructor(scope: Construct, id: string) {
    super(scope, id);

    new lambda.Function(this, 'LambdaProduct', {
      runtime: lambda.Runtime.PYTHON_3_9,
      code: lambda.Code.fromAsset('./assets'),
      handler: 'index.handler',
    });
  }
}

const product = new servicecatalog.CloudFormationProduct(this, 'Product', {
  productName: 'My Product',
  owner: 'Product Owner',
  productVersions: [
    {
      productVersionName: 'v1',
      cloudFormationTemplate: servicecatalog.CloudFormationTemplate.fromProductStack(new LambdaProduct(this, 'LambdaFunctionProduct')),
    },
  ],
});
```

### Creating a Product from a stack with a history of previous versions

The default behavior of Service Catalog is to overwrite each product version upon deployment.
@@ -1,4 +1,5 @@
import * as s3_assets from '@aws-cdk/aws-s3-assets';
import { IBucket } from '@aws-cdk/aws-s3';
import { Construct } from 'constructs';
import { hashValues } from './private/util';
import { ProductStack } from './product-stack';
@@ -46,9 +47,15 @@ export abstract class CloudFormationTemplate {
*/
export interface CloudFormationTemplateConfig {
/**
* The http url of the template in S3.
*/
* The http url of the template in S3.
*/
readonly httpUrl: string;

/**
* The S3 bucket containing product stack assets.
* @default -
*/

Contributor: Yes please? :)
readonly assetBucket?: IBucket;
}

/**
@@ -108,6 +115,7 @@ class CloudFormationProductStackTemplate extends CloudFormationTemplate {
public bind(_scope: Construct): CloudFormationTemplateConfig {
return {
httpUrl: this.productStack._getTemplateUrl(),
assetBucket: this.productStack._getAssetBucket(),
};
}
}
34 changes: 32 additions & 2 deletions packages/@aws-cdk/aws-servicecatalog/lib/portfolio.ts
@@ -1,7 +1,8 @@
import * as iam from '@aws-cdk/aws-iam';
import * as sns from '@aws-cdk/aws-sns';
import * as cdk from '@aws-cdk/core';
import { Construct } from 'constructs';
import { IBucket } from '@aws-cdk/aws-s3';
import { Construct, IConstruct } from 'constructs';
import { MessageLanguage } from './common';
import {
CloudFormationRuleConstraintOptions, CommonConstraintOptions,
@@ -81,6 +82,11 @@ export interface IPortfolio extends cdk.IResource {
*/
addProduct(product: IProduct): void;

/**
* Grants all shared accounts read permissions to each asset bucket.
*/
giveAssetBucketsReadToSharedAccounts(): void;

/**
* Associate Tag Options.
* A TagOption is a key-value pair managed in AWS Service Catalog.
@@ -105,7 +111,7 @@ export interface IPortfolio extends cdk.IResource {
* @param product A service catalog product.
* @param options options for the constraint.
*/
constrainCloudFormationParameters(product:IProduct, options: CloudFormationRuleConstraintOptions): void;
constrainCloudFormationParameters(product: IProduct, options: CloudFormationRuleConstraintOptions): void;

/**
* Force users to assume a certain role when launching a product.
@@ -155,6 +161,8 @@ abstract class PortfolioBase extends cdk.Resource implements IPortfolio {
public abstract readonly portfolioArn: string;
public abstract readonly portfolioId: string;
private readonly associatedPrincipals: Set<string> = new Set();
private readonly assetBuckets: Set<IBucket> = new Set<IBucket>();
private readonly sharedAccounts: String[] = [];

public giveAccessToRole(role: iam.IRole): void {
this.associatePrincipal(role.roleArn, role.node.addr);
@@ -169,10 +177,14 @@ abstract class PortfolioBase extends cdk.Resource implements IPortfolio {
}

public addProduct(product: IProduct): void {
if (product.assetBucket) {
this.assetBuckets.add(product.assetBucket);
}
AssociationManager.associateProductWithPortfolio(this, product, undefined);
}

public shareWithAccount(accountId: string, options: PortfolioShareOptions = {}): void {
this.sharedAccounts.push(accountId);
const hashId = this.generateUniqueHash(accountId);
new CfnPortfolioShare(this, `PortfolioShare${hashId}`, {
portfolioId: this.portfolioId,
@@ -220,6 +232,14 @@ abstract class PortfolioBase extends cdk.Resource implements IPortfolio {
AssociationManager.deployWithStackSets(this, product, options);
}

public giveAssetBucketsReadToSharedAccounts() {
for (const bucket of this.assetBuckets) {
for (const accountId of this.sharedAccounts) {
bucket.grantRead(new iam.AccountPrincipal(accountId));
}
Contributor (on lines +237 to +239):

I'm not exactly sure what the purpose of this method is, but it might be easier/better to use a CompositePrincipal instead for the policy.

Author:

The goal is for the portfolio to collect a set of S3 Asset buckets and a list of shared accounts. Given that the portfolio does not have an onSynthesize method, we created an aspect that will take each bucket and grant read to all accounts. Essentially, in the scenario where the portfolio contains three asset buckets and two accounts, we would want the portfolio to iterate through each bucket and grant read permissions to the two accounts. If CompositePrincipal can eliminate that second for loop that would be great, will definitely look into it
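The fan-out described here (every asset bucket readable by every shared account) can be sketched independently of the CDK; the bucket and account names below are hypothetical stand-ins:

```python
# Sketch of the grant fan-out: each asset bucket must be readable by
# each shared account, i.e. a cross product of buckets and accounts.
def plan_grants(asset_buckets, shared_accounts):
    return [(bucket, account)
            for bucket in asset_buckets
            for account in shared_accounts]

# The scenario from the comment: three buckets shared with two
# accounts yields six read grants.
grants = plan_grants(
    ["assets-a", "assets-b", "assets-c"],
    ["111111111111", "222222222222"],
)
```

A `CompositePrincipal` collapses the inner loop into a single `grantRead` call per bucket, but the same bucket-times-account policy surface is granted either way.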

Author:

How does this look? Hopefully it's more readable.

```ts
for (const bucket of this.assetBuckets) {
  const compositeAccountPrincipals = new iam.CompositePrincipal(...this.sharedAccounts.map(
    (accountId) => new iam.AccountPrincipal(accountId))
  );
  bucket.grantRead(compositeAccountPrincipals);
}
```



I have a question here. We have the following scenario: our stack creates multiple portfolios, each containing one product. Each portfolio is shared with another account, but sometimes the products have exactly the same asset (a reusable Lambda in our case). Would that mean that a separate bucket is created for each portfolio?

Author:

Great question, this was actually the initial design goal - one product stack per portfolio. I considered working with the cdk bootstrap bucket but the idea of giving cross account access to a bucket containing all of the user's assets did not seem to adhere to best security practices.

Ideally, we would like to reduce the share scope as much as possible. Portfolios seemed like a great way to accomplish this until we realized the timeline of events would not allow us to create an asset bucket in the portfolio and pass it down to the product stack. Why? The product stack needs the S3 bucket that will contain its assets at creation time, so that the product stack synthesizer can return the expected location of each asset from its public addFileAsset(_asset: cdk.FileAssetSource): cdk.FileAssetLocation method.

```
Portfolio constructor
ProductStack constructor
ProductStackSynthesizer::bind()
ProductStackSynthesizer::addFileAsset()
ProductStack::addFileAssetToParentSynthesizer()  <- this is where the product stack needs to know the bucket name
CloudFormationProductStackTemplate constructor
CloudFormationProduct constructor
CloudFormationProductStackTemplate::bind()
Portfolio::addProduct()
Portfolio::shareWithAccount()
ProductStackSynthesizer::synthesize()
```

We could technically make the product stack asset bucket user-facing and have the user create the bucket in their application stack. This would require adding product stack props so that we can take the asset bucket as a prop and proceed from there. Our team favored this approach less, given that it would force users who want to use assets to interact with their product stack differently from the way they already do. Here is what that user experience would look like:

```ts
export class HelloServerlessStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // Create development bucket
    const devBucket = new ProductStackAssetBucket(this, "ProductStackAssetBucket");

    // Create a portfolio
    const portfolio = new sc.Portfolio(this, 'DevToolsPortfolio', {
      displayName: 'DevTools',
      providerName: 'IT',
      assetBucket: devBucket,
    });

    // Create a Lambda product from a Product Stack
    const product = new sc.CloudFormationProduct(this, 'HelloServerless', {
      productName: "HelloServerless",
      owner: "IT",
      productVersions: [
        {
          cloudFormationTemplate: sc.CloudFormationTemplate.fromProductStack(
            new HelloServerlessProduct(this, 'HelloServerlessProduct', { assetBucket: devBucket })),
          productVersionName: "v1",
          description: "A simple REST API backed by a hello Lambda function"
        },
      ],
    });
  }
}
```


Actually I like this approach. Would it be possible to combine your approach with the one you introduced here, but with the assetBucket property optional? That would be a good solution for the use case my team is facing.
The other one would simply build over 100 buckets (a soft quota).
I imagine we could create this AssetsBucket in a separate stack that would be deployed right before the portfolio stack, which in our case would not be a security concern. Our current approach actually uses the CDK assets bucket, so I understand your security considerations; we have had those discussions as well.

Contributor:

With even more trickery, you could also return a Lazy.string() as bucketName in the Synthesizer, and copy in the real bucket name when the ProductStack is being synthesized in the context of a Portfolio.

(Btw. Lazy.string() will lead to every ProductStack instance only being usable in the context of a single Portfolio. Not sure how desirable or how bad that is, but you might want to add validation to make sure users don't get it wrong. Alternatively, Lazy.uncachedString() might help, but needs a test to confirm that it works if that's what you want to support)


> I considered working with the cdk bootstrap bucket but the idea of giving cross account access to a bucket containing all of the user's assets did not seem to adhere to best security practices.

Good call 😉. I like the approach you're taking here.

> Portfolios seemed like a great way to accomplish this until we realized the timeline of events would not allow for us to create an asset bucket in the portfolio and pass it down to the product stack

In software, things are rarely as cut-and-dried as "it's not possible". Most things are possible (given an appropriate amount of work). You've identified the ideal customer experience already; now is the time to fight to see if you can achieve it.

Author:

> Actually I like this approach. Would it be possible to combine your approach with the one you introduced here, but with the assetBucket property optional? That would be a good solution for the use case my team is facing. The other one would simply build over 100 buckets (a soft quota). I imagine we could create this AssetsBucket in a separate stack deployed right before the portfolio stack, which in our case would not be a security concern. Our current approach actually uses the CDK assets bucket, so I understand your security considerations; we have had those discussions as well.

Great, thanks for the feedback. We changed our implementation to support the user passing in an optional product stack asset bucket. The user will also have the ability to pass in a name for the assets bucket (otherwise, we'll generate the name for you 🙂).

}
}

/**
* Associate a principal with the portfolio.
* If the principal is already associated, it will skip.
@@ -336,6 +356,7 @@ export class Portfolio extends PortfolioBase {
if (props.tagOptions !== undefined) {
this.associateTagOptions(props.tagOptions);
}
cdk.Aspects.of(this).add(new PortfolioAssetBucketSharer());
}

protected generateUniqueHash(value: string): string {
@@ -348,3 +369,12 @@
InputValidator.validateLength(this.node.path, 'portfolio description', 0, 2000, props.description);
}
}

class PortfolioAssetBucketSharer implements cdk.IAspect {
public visit(node: IConstruct): void {
if (node instanceof Portfolio) {
const portfolio = node as Portfolio;
portfolio.giveAssetBucketsReadToSharedAccounts();
}
}
}
@@ -0,0 +1,83 @@
import * as cdk from '@aws-cdk/core';
import { IBucket, BlockPublicAccess, Bucket } from '../../../aws-s3';
import { BucketDeployment, ISource, Source } from '../../../aws-s3-deployment';
import { Construct } from 'constructs';
import { hashValues } from './util';

/**
* A Service Catalog product stack asset bucket, which is similar in form to an Amazon S3 bucket.
* You can store multiple product stack assets and collectively deploy them to S3.
*/
export class ProductStackAssetBucket extends Construct {
private readonly bucketName: string;
private readonly bucket: IBucket;
private readonly assets: ISource[];

constructor(scope: Construct, id: string) {
super(scope, id);

this.bucketName = this.generateBucketName(id);

this.bucket = new Bucket(scope, `${id}S3Bucket`, {
bucketName: this.bucketName,
blockPublicAccess: BlockPublicAccess.BLOCK_ALL,
});

this.assets = [];
}

/**
* Fetch the S3 bucket.
*
* @internal
*/
public getBucket(): IBucket {
return this.bucket;
}

/**
* Generate unique name for S3 bucket.
*
* @internal
*/
private generateBucketName(id: string): string {
const accountId = cdk.Stack.of(this).account;
if (cdk.Token.isUnresolved(accountId)) {
throw new Error('CDK Account ID must be defined in the application environment');
}
return `product-stack-asset-bucket-${accountId}-${hashValues(id)}`;
Contributor:

Can we use the CloudFormation intrinsic ${AWS::AccountId} instead? Do we need to know the bucket name at synth time?

Author:

Do you mind clarifying your first question? My understanding is that buckets share a global namespace and thus the concept of regions doesn't necessarily apply.

Author:

As for garbage collection, the great thing about having a deterministic name for the bucket is that the user will be forced to delete the buckets if they destroy and then try to redeploy their stacks. This way, users avoid the scenario of S3 buckets piling up. That said, the one remaining garbage collection concern is a user who destroys their stack (with no intention of ever deploying it again) and forgets to delete the asset buckets in S3.
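The deterministic-name argument can be illustrated outside the CDK. The exact implementation of the private hashValues helper is not shown here; the sketch below assumes a truncated SHA-256 digest, which preserves the property that matters: the same account and construct id always produce the same bucket name.

```python
import hashlib

def hash_values(value):
    # Hypothetical stand-in for the private hashValues helper:
    # a short, stable digest of the construct id.
    return hashlib.sha256(value.encode()).hexdigest()[:12]

def bucket_name(account_id, construct_id):
    # Mirrors generateBucketName: deterministic across synths, so a
    # destroyed-and-redeployed stack targets the same bucket name.
    return f"product-stack-asset-bucket-{account_id}-{hash_values(construct_id)}"
```

Calling `bucket_name('111122223333', 'MyAssets')` twice returns the same string, which is what forces users to clean up leftover buckets before redeploying.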

Author:

> Can we use the CloudFormation intrinsic ${AWS::AccountId} instead? Do we need to know the bucket name at synth time?

Yes, we are required to know the bucket name at synth time, given that the product stack synthesizer must return an expected location for the asset, and the entire appeal of product stacks is that they skip deployment altogether (they simply synthesize a template that gets uploaded to the user's CDK bootstrap bucket).

This being said, we tried using ${AWS::AccountId}, but the issue is that it contains a token rather than the explicit account id string. At deploy time the token is resolved and the bucket is created. The problem comes before that, when the product stack synthesizer tries to return the expected asset location: instead of returning product-stack-asset-bucket-1234 for the bucket name, it returns product-stack-asset-bucket-${TOKEN[Bucket.Name.1234]}

Contributor:

> Do you mind clarifying your first question? My understanding is that buckets share a global namespace and thus, the concept of regions doesn't necessarily apply.

A global namespace does not mean that it is a global service. If I publish the assets to a bucket in us-east-1, but I consume the assets in a stack deployment in us-east-2 I've created a cross region dependency. What if us-east-1 is experiencing an outage and I need to deploy my failover stack in us-east-2? I won't be able to deploy because I have a dependency on us-east-1.

> As for garbage collection...

Maybe I misunderstand how this feature works. I'm more talking about garbage collection for the assets themselves. When I change the Lambda code and deploy, does the new asset go to the same bucket? Over time this might lead to a lot of outdated assets.

}

/**
* Fetch the expected S3 location of an asset.
*
* @internal
*/
public addAsset(asset: cdk.FileAssetSource): cdk.FileAssetLocation {
const assetPath = './cdk.out/' + asset.fileName;
Contributor:

This is making too many assumptions.

Instead, I would hand the asset to the underlying stack's StackSynthesizer, and hand the result of that to the BucketDeployment (as a Source.s3()).

this.assets.push(Source.asset(assetPath));

const bucketName = this.bucketName;
const s3Filename = asset.fileName?.split('.')[1] + '.zip';
const objectKey = `${s3Filename}`;
const s3ObjectUrl = `s3://${bucketName}/${objectKey}`;
const httpUrl = `https://s3.${bucketName}/${objectKey}`;

return { bucketName, objectKey, httpUrl, s3ObjectUrl, s3Url: httpUrl };
}
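The object key derivation in addAsset assumes asset file names of the form asset.&lt;hash&gt;.zip (the cdk.out convention); a standalone sketch with hypothetical inputs makes the splitting concrete, and hints at why the reviewer considers it to rely on too many assumptions:

```python
def object_key(file_name):
    # Mirrors `asset.fileName?.split('.')[1] + '.zip'`: keep the hash
    # segment between the first two dots and re-append ".zip".
    return file_name.split(".")[1] + ".zip"

# Conventional cdk.out name: "asset.<hash>.zip" -> "<hash>.zip".
key = object_key("asset.d41d8cd98f00b204e980.zip")
# Any extra dot in the name would silently change the derived key.
```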

/**
* Deploy all assets to S3.
*
* @internal
*/
public deployAssets() {
if (this.assets.length > 0) {
new BucketDeployment(this, 'AssetsBucketDeployment', {
sources: this.assets,
destinationBucket: this.bucket,
unzipFile: false,
});
}
}
}
@@ -1,4 +1,7 @@
import * as cdk from '@aws-cdk/core';
import { ProductStack } from '../product-stack';
import { ProductStackAssetBucket } from './product-stack-asset-bucket';
import { hashValues } from './util';

/**
* Deployment environment for an AWS Service Catalog product stack.
@@ -7,6 +10,7 @@ import * as cdk from '@aws-cdk/core';
*/
export class ProductStackSynthesizer extends cdk.StackSynthesizer {
private stack?: cdk.Stack;
private assetBucket?: ProductStackAssetBucket;

public bind(stack: cdk.Stack): void {
if (this.stack !== undefined) {
@@ -16,17 +20,28 @@ export class ProductStackSynthesizer extends cdk.StackSynthesizer {
}

public addFileAsset(_asset: cdk.FileAssetSource): cdk.FileAssetLocation {
throw new Error('Service Catalog Product Stacks cannot use Assets');
if (!this.stack) {
throw new Error('You must call bindStack() first');
}

if (!this.assetBucket) {
const parentStack = (this.stack as ProductStack)._getParentStack();
this.assetBucket = new ProductStackAssetBucket(parentStack, `ProductStackAssetBucket${hashValues(this.stack.stackName)}`);
Contributor:

Hashing this doesn't seem necessary. Why not have the product name here?

Contributor:

You've picked the scope of asset buckets to be individual ProductStacks. What's the reason for picking that over, say, an asset bucket per Portfolio? Given that the entire portfolio is shared with the same groups of accounts anyway, a bucket per portfolio seems more desirable?

Author:

> You've picked the scope of asset buckets to be individual ProductStacks. What's the reason for picking that over, say, an asset bucket per Portfolio? Given that the entire portfolio is shared with the same groups of accounts anyway, a bucket per portfolio seems more desirable?

Check out this comment: #21508 (comment)

Author:

> Hashing this doesn't seem necessary. Why not have the product name here?

Let's take a look at the following scenario:

```ts
const product = new sc.CloudFormationProduct(this, 'HelloServerless', {
  productName: "HelloServerless",
  owner: "IT",
  productVersions: [
    {
      cloudFormationTemplate: sc.CloudFormationTemplate.fromProductStack(new ProductStack1(this, 'ProductStack1')),
      productVersionName: "v1",
      description: "A simple REST API backed by a hello Lambda function"
    },
    {
      cloudFormationTemplate: sc.CloudFormationTemplate.fromProductStack(new ProductStack2(this, 'ProductStack2')),
      productVersionName: "v2",
      description: "A simple REST API backed by a goodbye Lambda function"
    },
  ],
});
```

If we hashed by product name, the product stack asset buckets for both ProductStack1 and ProductStack2 would share the same id and thus error out. To avoid this, we hash by parent stack + product stack, which is what this.stack.stackName gives us.

(this.stack as ProductStack)._setAssetBucket(this.assetBucket.getBucket());
}

return this.assetBucket.addAsset(_asset);
}

public addDockerImageAsset(_asset: cdk.DockerImageAssetSource): cdk.DockerImageAssetLocation {
throw new Error('Service Catalog Product Stacks cannot use Assets');
throw new Error('Service Catalog Product Stacks cannot use DockerImageAssets');
}

public synthesize(session: cdk.ISynthesisSession): void {
if (!this.stack) {
throw new Error('You must call bindStack() first');
}
this.assetBucket?.deployAssets();
// Synthesize the template, but don't emit as a cloud assembly artifact.
// It will be registered as an S3 asset of its parent instead.
this.synthesizeStackTemplate(this.stack, session);
Expand Down