
Reduce number of parameters used by assets #3463

Closed
1 of 5 tasks
alexdilley opened this issue Jul 29, 2019 · 75 comments
Assignees
Labels
@aws-cdk/assets Related to the @aws-cdk/assets package @aws-cdk/aws-cloudformation Related to AWS CloudFormation @aws-cdk/core Related to core CDK functionality effort/medium Medium work item – several days of effort feature-request A feature should be added or improved. in-progress This issue is being actively worked on. p1

Comments

@alexdilley

alexdilley commented Jul 29, 2019

  • I'm submitting a ...

    • πŸͺ² bug report
    • πŸš€ feature request
    • πŸ“š construct library gap
    • ☎️ security issue or vulnerability => Please see policy
    • ❓ support request => Please see note at the top of this template.
  • What is the current behavior?

CloudFormation stacks are limited to 60 parameters; CDK produces a seemingly excessive number of parameters, so it is easy to hit that limit.

  • What is the expected behavior (or behavior of feature suggested)?

Perhaps use mappings instead, as suggested in the CloudFormation docs (see the sketch below).

  • What is the motivation / use case for changing the behavior or adding this feature?

To be able to define, for example, more than 20 Lambda functions in a stack: currently – for each function – CDK generates one parameter for its artifact hash, one for its S3 location, and one for its version.
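For illustration, a minimal sketch of what a mapping-based layout could look like, assuming one top-level mapping key per asset (the mapping name, logical IDs, bucket name, and hash values below are hypothetical; this is not what CDK emits today):

    "Mappings": {
      "AssetMap": {
        "FunctionA": {
          "S3Key": "assets/0123456789abcdef.zip",
          "ArtifactHash": "0123456789abcdef"
        }
      }
    },
    ...
    "Code": {
      "S3Bucket": "my-staging-bucket",
      "S3Key": { "Fn::FindInMap": [ "AssetMap", "FunctionA", "S3Key" ] }
    }

Since mappings are literal template content rather than parameters, they would not count against the 60-parameter limit.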

@alexdilley alexdilley added the needs-triage This issue or PR still needs to be triaged. label Jul 29, 2019
@NGL321 NGL321 added @aws-cdk/core Related to core CDK functionality feature-request A feature should be added or improved. @aws-cdk/aws-cloudformation Related to AWS CloudFormation and removed needs-triage This issue or PR still needs to be triaged. labels Jul 29, 2019
@NGL321
Contributor

NGL321 commented Jul 29, 2019

Hi @alexdilley,

Thank you for reaching out!
We are aware of this gap, and will address it when able. Someone will update this issue when that happens.

@eladb eladb assigned eladb and unassigned eladb Aug 12, 2019
@niranjan94

Hi @NGL321,

Thanks for your response.

We just started using CDK and ran into this issue. We have 21 Lambdas, and the parameter count comes to more than 63. This makes it impossible for us to deploy with CDK right now, since CloudFormation doesn't allow it. Is there any way to make CDK inline the parameters (bucket/key/version hash) directly in the template instead of passing them as parameters?

@niranjan94

The current workaround that I have been using is:

  1. synthesize the template with cdk
  2. modify the template to inline the S3 bucket and URI instead of referencing parameters
  3. deploy with aws cloudformation deploy instead of cdk deploy

Code snippet in case anyone else needs to use it:
https://gist.github.com/niranjan94/92f2636a29f09bd6cc53085951e78046
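As a rough sketch of what the inlining amounts to (the parameter logical IDs, bucket name, and key below are made up for illustration; the real ones come from your synthesized template), a reference like

    "Code": {
      "S3Bucket": { "Ref": "AssetParametersXxxS3BucketYyy" },
      "S3Key": { "Ref": "AssetParametersXxxS3VersionKeyYyy" }
    }

gets rewritten to literal values before deploying:

    "Code": {
      "S3Bucket": "my-cdk-staging-bucket",
      "S3Key": "assets/0123456789abcdef.zip"
    }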

@eladb
Contributor

eladb commented Aug 27, 2019

The long-term solution should be to have only a single parameter per CDK asset. It will still require a parameter per asset, since asset locations change based on their source hash, but it will reduce the number of parameters by a factor of 3.

@eladb eladb changed the title Parameter count limitation Reduce number of parameters used by assets Aug 28, 2019
@eladb eladb added the @aws-cdk/assets Related to the @aws-cdk/assets package label Aug 28, 2019
@sublimemm

sublimemm commented Sep 6, 2019

The long-term solution should be to have only a single parameter per CDK asset. It will still require a parameter per asset, since asset locations change based on their source hash, but it will reduce the number of parameters by a factor of 3.

I do not see why all parameters cannot be placed into a map parameter. They must have unique keys/ids already since parameters cannot have duplicated names, correct?

I do not think the community would consider a stack limited to 30 assets to be the long-term solution. And that 30-asset figure assumes users need no parameters of their own (a pretty faulty assumption, IMHO).

@dehli
Contributor

dehli commented Sep 9, 2019

@sublimemm I think that would also be the best way; however I'm not sure how it could be a map. CloudFormation doesn't seem to support pulling out values from a map unless it's specifically a Mapping. Is there some function I'm missing?

What I've seen is having a long string as the parameter (joining all the parameters with a special character such as |) and then using a combination of !Split and !Select. CDK would have to maintain the appropriate indices for each asset with this solution.
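A minimal sketch of that idea, assuming a single pipe-delimited parameter and synthesizer-maintained indices (the parameter name and value layout are hypothetical):

    "Parameters": {
      "AssetLocations": { "Type": "String" }
    },
    ...
    "S3Key": {
      "Fn::Select": [ "1", { "Fn::Split": [ "|", { "Ref": "AssetLocations" } ] } ]
    }

The deployment would then pass something like my-staging-bucket|assets/abc123.zip|... as the single parameter value.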

@sublimemm

@eladb You said the one parameter per asset would reduce the number of parameters by a factor of 3, but I think it will actually do much more. Our stack has 5 assets and between them 63 parameters. So reducing it to 5 would be a much more palatable solution.

It's unclear to me when the CDK decides something is a new asset or not... but I thought your original suggestion was one parameter per construct/lambda/etc. One per asset seems tenable, assuming the cdk is diligent with splitting the stack into assets.

@dehli
Contributor

dehli commented Sep 9, 2019

@sublimemm I just looked at my template and it looks like it follows the 3 parameters per asset. Maybe you have something else adding parameters?

@sublimemm

After some digging, it's all about the Lambdas. It's adding 3 per Lambda; we have tons of Lambdas, with many per asset.

@sublimemm

(screenshot of the generated asset parameters)

Here are some examples; you can see they're duplicating the bucket parameter (obviously not needed, since all CDK stacks are deployed to the same bucket, the toolkit bucket).

They're also duplicating parameters if multiple lambdas share an asset bundle (we have tons of them that share the same asset path).

@eladb
Contributor

eladb commented Sep 9, 2019

We are looking into improving this as part of our work on CI/CD. The current thinking is to actually reduce the number of asset parameters to zero by using well-known, convention-based physical names for the bootstrapping resources (bucket/ECR repository) and the source hash as the key (S3 object key/Docker image tag). This basically means that we don't need any degrees of freedom during deployment. I am curious about people's thoughts on this...
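A hedged sketch of what such a zero-parameter asset reference might look like, modeled on the convention-based bootstrap bucket naming that appears later in this thread (the qualifier and object key are placeholders):

    "Code": {
      "S3Bucket": { "Fn::Sub": "cdk-hnb659fds-assets-${AWS::AccountId}-${AWS::Region}" },
      "S3Key": "0123456789abcdef.zip"
    }

Because both the bucket name and the key are derived from convention plus the source hash, nothing has to be passed in as a stack parameter at deploy time.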

@sublimemm

We are looking into improving this as part of our work on CI/CD. The current thinking is to actually reduce the number of asset parameters to zero by using well-known, convention-based physical names for the bootstrapping resources (bucket/ECR repository) and the source hash as the key (S3 object key/Docker image tag). This basically means that we don't need any degrees of freedom during deployment. I am curious about people's thoughts on this...

I think that is a great solution.

eladb pushed a commit that referenced this issue Oct 3, 2019
The `NestedStack` construct is a special kind of `Stack`. Any resource defined within its scope will be included in a separate template from the parent stack. 

The template for the nested stack is synthesized into the cloud assembly, but it is treated as a file asset rather than as a deployable unit. This causes the CLI to upload it to S3 and wire its coordinates into the parent stack so we can reference its S3 URL.

To support references between the parent stack and the nested stack, we abstracted the concept of preparing cross references by inverting the control of `consumeReference` from the reference object itself to the `Stack` object. This allows us to override it at the `NestedStack` level (through `prepareCrossReference`) and mutate the token accordingly. 

When an outside resource is referenced within the nested stack, it is wired through a synthesized CloudFormation parameter. When a resource inside the nested stack is referenced from outside, it will be wired through a synthesized CloudFormation output. This works for arbitrarily deep nesting.

When an asset is referenced within a nested stack, it will be added to the top-level stack and wired through the asset parameter reference (like any other reference).
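A hedged sketch of that wiring (logical IDs are hypothetical): a parent-stack value referenced inside the nested stack surfaces as a parameter on the nested template,

    "Parameters": {
      "referencetoParentBucketRef": { "Type": "String" }
    }

while a nested-stack value referenced from the parent surfaces as an output on the nested template and is read through Fn::GetAtt on the parent's AWS::CloudFormation::Stack resource:

    { "Fn::GetAtt": [ "MyNestedStack", "Outputs.MyQueueArn" ] }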

Fixes #239
Fixes #395
Related #3437 
Related #1439 
Related #3463
@dsmrt

dsmrt commented Nov 16, 2019

To add to this...
I'm hitting the maximum number of parameters (60) while using CDK assets for AppSync FunctionConfigs and Resolvers. I have about 33 resolvers, and it looks like CDK creates 3 parameters per asset (as mentioned above). Reducing asset parameters to zero would help me a ton!

@tekdroid

tekdroid commented Nov 21, 2019

I'm attempting to migrate from raw CloudFormation YAML templates to the CDK and immediately hit this issue: my first attempt generated 120 parameters, so I'm blocked outright. I just spent 3 days converting templates thinking "surely CDK handles the maximum limits". Very discouraging to see this happen before ever being able to deploy our structure. We have tens of thousands of lines of config and I want to move the team away from that.

Would love to see some progress made toward this fix.

In the meantime, anyone have a workaround aside from the one above? Would re-structuring the hierarchy in some way help? I've tried to create extra layers but they then just pass the parameters through the layer and the problem remains.

@niranjan94

@tekdroid Yep, I tried restructuring too, but it didn't work out. So I'm still using the workaround I mentioned above (inlining all the asset paths into the template) and deploying with aws cloudformation deploy instead of cdk deploy.

Updated script to do the inlining in case you are interested:

https://gist.github.com/niranjan94/92f2636a29f09bd6cc53085951e78046

@tekdroid

@niranjan94 thanks for the updated script, I appreciate that! I'll check it out and see if I can use this method for now. Cheers!

@tekdroid

tekdroid commented Nov 22, 2019

@niranjan94 Your script is not working as expected for me. Would you mind helping me out? It removed all of the parameters, but didn't replace their usages.

Is there another medium by which we could chat for a moment? If you can't that's fine, I can reverse engineer this script.

Mainly I just wanted to see if I'm running the synthesis of the cdk output the same way you are. Looks like you're looking for Resource -> Properties -> Content, but I don't have a node at that path. I have Resource -> Properties -> TemplateURL, which is a join function for the s3 bucket/key parameters that were removed.

EDIT: No worries, on second thought I really don't want to go down this road. I'll continue our team's conversion when this is officially fixed inside the CDK.

@hoegertn
Contributor

@eladb I really like reducing the number to zero and using asset hashes. For the toolkit bucket, I would suggest an Fn::ImportValue instead of convention-based names to prevent name squatting attacks.

@eladb
Contributor

eladb commented Dec 18, 2019

We can't use Fn::ImportValue because we need to know before deployment where to upload assets (see the cdk-assets RFC).

Copy @rix0rrr

@eladb
Contributor

eladb commented Jun 14, 2020

@rix0rrr Another issue: When using ContainerImage.fromAsset the generated CFN template looks like this:


            "Image": {

              "Fn::Sub": {

                "Fn::Join": [

                  "",

                  [

                    "<id>.dkr.ecr.eu-central-1.",

                    {

                      "Ref": "AWS::URLSuffix"

                    },

                    "/cdk-<my-asset-ecr>:<hash>"

                  ]

                ]

              }

            },

This is invalid CloudFormation as Join is not valid inside Sub:

E1019 Sub should be a string or array of 2 items for Resources/TaskDef3BF4F22B/Properties/ContainerDefinitions/0/Image/Fn::Sub
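For comparison, cfn-lint expects Fn::Sub to be a plain string (or a two-item array of a string and a substitution map); a hedged sketch of the equivalent form, using the same placeholders as the snippet above:

    "Image": {
      "Fn::Sub": "<id>.dkr.ecr.eu-central-1.${AWS::URLSuffix}/cdk-<my-asset-ecr>:<hash>"
    }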

Am I doing something wrong or is this a bug in the new asset system?

EDIT: One more: the S3 Bucket Deployment construct does not work anymore as the custom lambda does not have permissions to use the KMS key of the asset bucket.

This seems like a bug. Can you please raise a separate issue?

@eladb
Contributor

eladb commented Jun 14, 2020

THIS IS GLORIOUS - thanks heaps! Works a charm.

Here's someone popping you all a well-deserved beer:

(image)

ps. When this gets documented for general use, there should be a big warning about --cloudformation-execution-policies, as I initially left it off (why?!?!) and that left me to go through a tunnel of pain to return CloudFormation to the state it was in before I attempted it. Following your instructions worked perfectly.

Thanks so much!

Mind raising a separate issue for that?

@brennick

@rix0rrr Will the new asset system ever become the default, or will we always need to adjust cdk.json?

@damonmcminn

THIS IS GLORIOUS - thanks heaps! Works a charm.
Here's someone popping you all a well-deserved beer:
(image)
ps. When this gets documented for general use, there should be a big warning about --cloudformation-execution-policies, as I initially left it off (why?!?!) and that left me to go through a tunnel of pain to return CloudFormation to the state it was in before I attempted it. Following your instructions worked perfectly.
Thanks so much!

Mind raising a separate issue for that?

Absolutely.

@brennick

brennick commented Jul 2, 2020

After running the new bootstrap command, I'm getting the following error when doing cdk deploy

Could not assume role in target account (did you bootstrap the environment with the right '--trust's?): Roles may not be assumed by root accounts.

Any idea what I'm doing wrong?

EDIT: It was due to using root credentials in our CLI. Creating an IAM user resolved the issue.

@eladb
Contributor

eladb commented Jul 5, 2020

@rix0rrr Is this documented in the CDK pipelines docs?

@mattiLeBlanc

mattiLeBlanc commented Jul 6, 2020

@rix0rrr I am having a similar issue:

Could not assume role in target account (did you bootstrap the environment with the right '--trust's?): User: arn:aws:sts::[accountID]:assumed-role/AWSReservedSSO_AdministratorAccess_ddb.../[USER] is not authorized to perform: sts:AssumeRole on resource: arn:aws:iam::[accountId]:role/cdk-hnb659fds-deploy-role-[accountId]-ap-southeast-2

So the first time I bootstrap and deploy, it works. But on any subsequent deployment I get this error.

Only when I delete my S3 bucket and bootstrap again can I deploy. So I can only deploy once locally before I have to do this again.

However, this wouldn't work for us in deployment because it would be madness to remove the staging bucket after each deploy.
I am kind of stuck now with the options of either downgrading and creating sub-stacks for my more than 20 Lambdas, or creating a post-deploy cleanup that removes that staging bucket so we can do more than one deploy :-/

My CDK version is 1.49.1
We use SSO and have programmatic access for our local deployment via the aws credential file.

@mattiLeBlanc

mattiLeBlanc commented Jul 14, 2020

@NGL321 So we still have issues deploying our stack (an AppSync API with Lambdas; more than 20 of them = 60+ params) with the new CDK synthesis. The error is as mentioned in my previous post above.

Our accounts are DEV, STAGING and PROD. We have SSO users with read/write permissions, and we use the ~/.aws/credentials file with our token to deploy locally to our personal version of the API in the DEV account.
So when I was the first to use the new param system, I bootstrapped it with my account. This created/updated the CDKToolkit stack and added the S3 bucket to our DEV account.
All good...
But then when my colleague also started to use the new CDK he couldn't deploy, so he had to remove the CDKToolkit stack and bootstrap again. Now he could deploy... but surprise, I couldn't again.
To make things even worse, when we pushed our changes to Bitbucket, the pipeline tries to deploy the DEV version of the API to the DEV account (in which we had already bootstrapped), so it complained that the S3 bucket already exists:

StagingBucket Requested update requires the creation of a new physical resource; hence creating one.
  7/11 | 1:28:09 AM | UPDATE_FAILED        | AWS::S3::Bucket       | StagingBucket cdk-hnb659fds-assets-[account number]-ap-southeast-2 already exists
  5/11 | 1:28:09 AM | UPDATE_ROLLBACK_IN_P | AWS::CloudFormation::Stack | CDKToolkit The following resource(s) failed to update: [StagingBucket]. 
  5/11 | 1:28:12 AM | UPDATE_COMPLETE      | AWS::S3::Bucket       | StagingBucket 

So this becomes a bit hairy.
So:
A) Should we move all our personal accounts to another env called USER, so that DEV solely has one API and one bucket for deployment, and we have another bucket for our personal API deployments in the USER account?
B) Should the bootstrap script be more robust and check whether a bucket already exists, so it doesn't have to recreate it?
C) Why are we getting these trust issues? I am not sure if we can set a trust relationship on an SSO user. Is the issue that Bob bootstraps with the DeveloperReadWrite SSO user and then, when James deploys, he has a different user signature, so the bootstrapped bucket doesn't like him?

Our permission set for the developers includes "sts:*", so I'm not sure what else is required for it to be trusted and to assume a role.

jgrillo-grapl added a commit to grapl-security/grapl that referenced this issue Jul 28, 2020
* Upgrade aws cdk to 1.46 to take advantage of fix for aws/aws-cdk#3463
@eladb eladb assigned rix0rrr and unassigned eladb Aug 17, 2020
@eladb eladb added the p1 label Aug 17, 2020
@ghost

ghost commented Aug 25, 2020

After running the new bootstrap command, I'm getting the following error when doing cdk deploy

Could not assume role in target account (did you bootstrap the environment with the right '--trust's?): Roles may not be assumed by root accounts.

Any idea what I'm doing wrong?

EDIT: It was due to using root credentials in our CLI. Creating an IAM user resolved the issue.

I'm using an IAM user, but I still get the same error as you. Do I need to create a new IAM user for the CLI?

@ajsivsan

ajsivsan commented Oct 5, 2020

I updated the CDK to use the new bootstrap, and synth still shows 63 parameters. I'm uploading a bunch of file Assets and referring to them using s3_url. Does the new bootstrap work for the S3 Asset class as well?

@hsks

hsks commented Oct 13, 2020

For anyone who is struggling, here's how we made it work in our case:
We followed @rix0rrr's suggestion and that helped mostly (thanks for your valuable suggestion). However, we faced some minor issues even after that.
After bootstrapping the environments successfully, we still hit the parameter limit. However, a quick update of the cdk CLI to the latest version (1.67.0 in our case) resolved that.
Moreover, we wanted to deploy using a user other than the administrator. CloudFormation throws self-explanatory errors in such cases. As a fix, we needed to add the deployment user in question as a trusted entity to some of the roles created by cdk after bootstrapping the environments (these roles are clear from the CloudFormation error messages).

Doing so made it work perfectly.
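For reference, a minimal sketch of the kind of trust-policy statement that addition amounts to, assuming a hypothetical deployment user ARN (the exact roles and statements come from your bootstrap stack and the error messages):

    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::123456789012:user/deploy-user" },
      "Action": "sts:AssumeRole"
    }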

@jshaw-vf

For anyone who is struggling, here's how we made it work in our case:
We followed @rix0rrr's suggestion and that helped mostly (thanks for your valuable suggestion). However, we faced some minor issues even after that.
After bootstrapping the environments successfully, we still hit the parameter limit. However, a quick update of the cdk CLI to the latest version (1.67.0 in our case) resolved that.
Moreover, we wanted to deploy using a user other than the administrator. CloudFormation throws self-explanatory errors in such cases. As a fix, we needed to add the deployment user in question as a trusted entity to some of the roles created by cdk after bootstrapping the environments (these roles are clear from the CloudFormation error messages).

Doing so made it work perfectly.

Nice, our team solved this the same way and ran into similar issues. It would be nice to have a list of the exact permissions a role needs, so we could grant least-privilege access and still make this work.

@hsks

hsks commented Oct 13, 2020

For anyone who is struggling, here's how we made it work in our case:
We followed @rix0rrr's suggestion and that helped mostly (thanks for your valuable suggestion). However, we faced some minor issues even after that.
After bootstrapping the environments successfully, we still hit the parameter limit. However, a quick update of the cdk CLI to the latest version (1.67.0 in our case) resolved that.
Moreover, we wanted to deploy using a user other than the administrator. CloudFormation throws self-explanatory errors in such cases. As a fix, we needed to add the deployment user in question as a trusted entity to some of the roles created by cdk after bootstrapping the environments (these roles are clear from the CloudFormation error messages).
Doing so made it work perfectly.

Nice, our team solved this the same way and ran into similar issues. It would be nice to have a list of the exact permissions a role needs, so we could grant least-privilege access and still make this work.

I completely agree. We usually wait for deployments to break before modifying the trust relationships. Helps keep the permission boundary as intact as possible.
Hopefully folks at AWS will document this soon.

@MinCheTsai

The new asset system is available properly starting at 1.45.0.

Easiest way to enable it is to put this into your cdk.json:

{
  "context": {
    "@aws-cdk/core:newStyleStackSynthesis": true
  }
}

You also have to bootstrap any environments you want to deploy into using the new version of the bootstrap stack:

$ env CDK_NEW_BOOTSTRAP=1 npx cdk bootstrap \
  --cloudformation-execution-policies arn:aws:iam::aws:policy/AdministratorAccess \
  aws://123456789012/us-east-2

This will create a new S3 Bucket and ECR Repository with a predictable name, which we will reference directly from the template without the use of asset parameters.

I have used this solution in our projects, and it works! [CDK v1.92.0]

@rix0rrr @eladb I have a maintenance question.
This solution doesn't seem to be publicly documented yet, and I'm cautious about relying on it for a long-term project. Can I keep it and continue upgrading the CDK version in the future? The project stacks can't be created again (they are in production).

@henrist

henrist commented Apr 14, 2021

CloudFormation increased the limit from 60 parameters to 200 parameters in October (https://aws.amazon.com/about-aws/whats-new/2020/10/aws-cloudformation-now-supports-increased-limits-on-five-service-quotas/)

@eladb
Contributor

eladb commented May 13, 2021

This is solved with the new bootstrap stack which is now the default in v2 - I believe this can be closed. Feel free to add additional comments if anyone feels differently.

@eladb eladb closed this as completed May 13, 2021
@github-actions

⚠️COMMENT VISIBILITY WARNING⚠️

Comments on closed issues are hard for our team to see.
If you need more assistance, please either tag a team member or open a new issue that references this one.
If you wish to keep having a conversation with other community members under this issue feel free to do so.
