
Merge #24

Merged: 44 commits merged into flemjame-at-amazon:master on Sep 16, 2020

Conversation

flemjame-at-amazon (Owner)

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license

yerzhan7 and others added 30 commits September 10, 2020 09:25
…rget group (#3703) (#10205)


Add the following Health Check validations for Network Target Group (a usage sketch follows the list):
 - `healthyThresholdCount` and `unhealthyThresholdCount` must be the same
 - `healthyThresholdCount` and `unhealthyThresholdCount` must be between 2 and 10
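
A minimal usage sketch of a target group that satisfies both checks (the surrounding stack and `vpc` are assumed; construct and property names are from `@aws-cdk/aws-elasticloadbalancingv2`):

```ts
import * as ec2 from '@aws-cdk/aws-ec2';
import * as elbv2 from '@aws-cdk/aws-elasticloadbalancingv2';

declare const vpc: ec2.Vpc;

// Thresholds are equal and within the [2, 10] range, so the new
// validations pass at synth time.
new elbv2.NetworkTargetGroup(this, 'TargetGroup', {
  vpc,
  port: 443,
  healthCheck: {
    healthyThresholdCount: 3,
    unhealthyThresholdCount: 3,
  },
});
```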

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
Added support for using `credential_source` in the standard aws config file.

This wasn't previously supported because the JavaScript SDK does [not](aws/aws-sdk-js#1916) support it. 

This PR bypasses the limitation.
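
For reference, this is the kind of shared config profile it unlocks (the profile name and role ARN below are placeholders):

```ini
# ~/.aws/config
[profile deployment]
role_arn = arn:aws:iam::123456789012:role/DeployRole
credential_source = Ec2InstanceMetadata
```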

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
Pipeline structure looks like this:

```
┌────────┐        ┌────────┐        ┌──────────┐
│        │        │        │        │          │
│  src   │───────▶│ synth  │───────▶│selfupdate│─────▶...(deploy)...
│        │        │        │        │          │
└────────┘        └────────┘        └──────────┘
```

When changing the pipeline structure, we expect it to restart (and it
does).

However, when changing something about the "synth" action (such as
adding tests to the list of commands), we would ALSO expect the pipeline to restart.
This would ensure that if we add tests, the tests get executed and
must pass before we start the actual deployments. That is, we'd like
to run the tests on the CURRENT commit as well, but by the time
we update the buildspec to run the tests, we've already passed the stage
where the tests would be run.

This is because doing something like adding
a test didn't actually change the *Pipeline structure* (causing
a restart). Instead, it would change the buildspec of the
CodeBuild Project the pipeline was referencing, leaving the pipeline
unchanged and ready to continue deploying. The tests would
only be executed for FUTURE commits, leaving the current commit
untested.

The solution is to put a hash of the buildspec somewhere, but where?

We could have put it in the CodeBuild Project's logical id, so that we
would have torn down and created a new CodeBuild project to ensure that
the pipeline would restart. However, doing so would prevent giving the
CodeBuild project a deterministic display name (replacement would fail).
Plus, a simpler and cheaper solution is to put a hash of the buildspec
in the Action that references the CodeBuild project.

There is exactly one field where we can stash some user-specified
data that doesn't impact the build result: `EnvironmentVariables`.
We add an environment variable with an unlikely name, which is not
used by the build itself. Its sole purpose is to make the pipeline
definition change when the buildspec changes, causing a pipeline
restart.
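
A rough sketch of the mechanism, assuming a synth action built from `codepipeline-actions`; the variable name and hashing details here are illustrative rather than the exact implementation:

```ts
import * as crypto from 'crypto';
import * as codebuild from '@aws-cdk/aws-codebuild';
import * as codepipeline from '@aws-cdk/aws-codepipeline';
import * as codepipeline_actions from '@aws-cdk/aws-codepipeline-actions';

declare const project: codebuild.PipelineProject;
declare const sourceOutput: codepipeline.Artifact;
declare const buildSpec: codebuild.BuildSpec;

// Hash the rendered buildspec and stash it in the action definition. The
// variable is never read by the build; its only job is to change the pipeline
// definition (and therefore force a restart) whenever the buildspec changes.
const buildSpecHash = crypto
  .createHash('sha256')
  .update(JSON.stringify(buildSpec.toBuildSpec()))
  .digest('hex');

const synthAction = new codepipeline_actions.CodeBuildAction({
  actionName: 'Synth',
  project,
  input: sourceOutput,
  environmentVariables: {
    _BUILDSPEC_HASH: { value: buildSpecHash }, // unlikely name, unused by the build itself
  },
});
```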

Fixes #9458.


----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
When `AWS_REGIONS=region1,region2,region3` is set, the CLI integration
tests are going to parallelize themselves across those regions.
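
For example (the test-runner command below is hypothetical; only the `AWS_REGIONS` variable is part of this change):

```console
$ AWS_REGIONS=us-east-1,us-west-2,eu-west-1 yarn integ-cli
```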


----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
Currently, the HttpsRedirect construct provisions an IPv6-enabled
CloudFront distribution, but only wires up an A (IPv4) alias record to
the distribution in Route 53.

Add an AAAA (IPv6) alias record to directly support this traffic.

See:
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-to-cloudfront-distribution.html
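
A hedged sketch of what the added record looks like in CDK terms (`zone` and `distribution` are assumed to exist; the construct id and record name are illustrative):

```ts
import * as cloudfront from '@aws-cdk/aws-cloudfront';
import * as route53 from '@aws-cdk/aws-route53';
import * as targets from '@aws-cdk/aws-route53-targets';

declare const zone: route53.IHostedZone;
declare const distribution: cloudfront.IDistribution;

// Alongside the existing A record, an AAAA alias record pointing at the
// same distribution serves IPv6 clients.
new route53.AaaaRecord(this, 'RedirectAliasRecordSix', {
  zone,
  recordName: 'www.example.com',
  target: route53.RecordTarget.fromAlias(new targets.CloudFrontTarget(distribution)),
});
```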


----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
The OAI gets a comment that includes the name of the origin it is attached to.
With longer distribution and origin names, this comment can overflow the
(undocumented) 128-character limit. This fix truncates the comment.
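
Illustratively, the truncation amounts to something like this (the comment text is made up):

```ts
// Cap the generated comment at the (undocumented) 128-character limit.
const originName = 'a-rather-long-distribution-and-origin-name.example.com'.repeat(3);
const fullComment = `Identity for ${originName}`;
const comment = fullComment.length > 128 ? fullComment.substring(0, 128) : fullComment;
```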

I also updated `oai.test.ts` from using the `nodeunitshim` to actually be
written as Jest tests.

fixes #10211


----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
Integ tests are failing with:

```console
Error: Cannot find module '@aws-cdk/core'
    at Function.Module._resolveFilename (internal/modules/cjs/loader.js:636:15)
    at Function.Module._load (internal/modules/cjs/loader.js:562:25)
    at Module.require (internal/modules/cjs/loader.js:692:17)
    at require (internal/modules/cjs/helpers.js:25:18)
    at Object.<anonymous> (/tmp/cdk-integ-0gjmfcdff6x/app.js:2:13)
    at Module._compile (internal/modules/cjs/loader.js:778:30)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:789:10)
    at Module.load (internal/modules/cjs/loader.js:653:32)
    at tryModuleLoad (internal/modules/cjs/loader.js:593:12)
    at Function.Module._load (internal/modules/cjs/loader.js:585:3)
Subprocess exited with error 1
❌ Error: 'cdk bootstrap aws://1234567890/us-west-2' exited with error code 1
```

This is because we try to bootstrap before installing the necessary modules. Since our app has a `cdk.json` file and we are running inside that directory, `cdk bootstrap` tries to synth the app to detect environments, and fails.

Just install all the modules before bootstrap.

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
closes #10271 

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
…in dependent Stacks (#10299)

We have logic in KMS Key that checks whether the grantee is from a Stack that depends on the Key's Stack.
It's required because KMS validates that the principals contained in its Key policy actually exist,
and fails if they don't, so in that case we switch to using the root principal instead.
However, that logic only makes sense for newly created resources;
imported resources, like those obtained with `Role.fromRoleArn`,
already exist, so there is no need to make this switch for them.
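
For example, a grant to an imported role now references the role directly (the ARN below is a placeholder):

```ts
import * as iam from '@aws-cdk/aws-iam';
import * as kms from '@aws-cdk/aws-kms';

declare const key: kms.Key;

// The imported role already exists, so KMS can validate it in the key policy;
// no fallback to the account root principal is needed.
const importedRole = iam.Role.fromRoleArn(
  this, 'ImportedRole', 'arn:aws:iam::123456789012:role/ExistingRole',
);
key.grantDecrypt(importedRole);
```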

Fixes  #10166.

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
…0291)

The `IDatabaseCluster` interface has a lot of properties, putting a large strain
on users who want to import clusters via `DatabaseClusterAttributes`. This
alters the attributes interface to contain almost exclusively optional props,
and lazily throw if any missing properties are accessed.

I believe this is a backwards-compatible change, but someone please double-check
me on that.
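
A minimal sketch of the lazy-throw pattern, with made-up names rather than the actual `aws-rds` code:

```ts
interface ImportedClusterAttributes {
  readonly clusterIdentifier: string;
  readonly clusterEndpointAddress?: string; // now optional
}

class ImportedCluster {
  constructor(private readonly attrs: ImportedClusterAttributes) {}

  // Only throws if (and when) the missing attribute is actually accessed.
  public get clusterEndpointAddress(): string {
    if (!this.attrs.clusterEndpointAddress) {
      throw new Error('Cannot access `clusterEndpointAddress`: it was not supplied when the cluster was imported');
    }
    return this.attrs.clusterEndpointAddress;
  }
}
```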

fixes #3587


----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
Closes #10277

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
)

Support `Enum Types` for the code-first approach. `Enum Types` are a special kind of Intermediate Type in CDK.

<details>
<summary> Desired GraphQL Enum</summary>

```gql
enum Episode {
  NEWHOPE
  EMPIRE
  JEDI
}
```

</details>

The above GraphQL Enumeration Type can be expressed in CDK as the following:

<details>
<summary>CDK Code</summary>

```ts
const episode = new appsync.EnumType('Episode', {
  definition: [
    'NEWHOPE',
    'EMPIRE',
    'JEDI',
  ],
}); 
api.addType(episode);
```

</details>

Discussion on the addField API can be found [here](#10023 (comment)). 

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
…neration (#10078)

Implemented an `addSubscription` function for the `appsync.Schema` class to make schema generation easier.

```ts
api.addSubscription('addedFilm', new appsync.ResolvableField({
  returnType: film.attribute(),
  args: { id: appsync.GraphqlType.id({ isRequired: true }) },
  directive: [appsync.Directive.subscribe('addFilm')],
}));
```
Fixes: #9345 

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
As discussed during our @aws-cdk/cloudformation-include API review.

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
Bumps [node-fetch](https://github.com/bitinn/node-fetch) from 2.6.0 to 2.6.1.
- [Release notes](https://github.com/bitinn/node-fetch/releases)
- [Changelog](https://github.com/node-fetch/node-fetch/blob/master/docs/CHANGELOG.md)
- [Commits](node-fetch/node-fetch@v2.6.0...v2.6.1)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Adds the ability to import secrets by name, including without the SecretsManager-assigned
suffix. As long as a secret with the same name has been created in each
region, this allows the same `fromSecretName` usage in stacks across regions.

Oddly enough, most CloudFormation templates that take references to secrets
accept either the full ARN (including the suffix) or just the base secret
name (not in ARN format). The one place where the full ARN format is needed is in
IAM policy statements, where a wildcard is necessary to account for the
suffix.

Tested this manually against an existing secret with a CodeBuild project; per
the CloudFormation docs, this should work equally well with other
SecretsManager-integrated services.
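
For example (the secret name is a placeholder):

```ts
import * as secretsmanager from '@aws-cdk/aws-secretsmanager';

// Resolves by name in whichever region the stack deploys to, as long as a
// secret named 'prod/db/credentials' exists there.
const secret = secretsmanager.Secret.fromSecretName(this, 'DbSecret', 'prod/db/credentials');
```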

fixes #7444
fixes #7949
fixes #7994
…#10337)

`--cloudformation-execution-policies` was always required for
bootstrapping to be useful, but it was only checked for presence if
you were adding cross-account trust.

People keep making the mistake of forgetting it, though.

As a stopgap until we get a better UX in place, check that the
value is present.
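
For reference, a bootstrap invocation that satisfies the new check (the account IDs and managed policy ARN are placeholders):

```console
$ cdk bootstrap aws://111111111111/us-east-1 \
    --trust 222222222222 \
    --cloudformation-execution-policies arn:aws:iam::aws:policy/AdministratorAccess
```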


----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
If there are credentials but using them fails, the "default account
lookup" that's used to populate the environment variables fails
(when it should have returned `undefined` instead), and the CLI aborts
before the CDK app is synth'ed.

Fixes #7849.


----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
This PR implicitly selects the correct Bottlerocket variant consistent with the cluster's k8s version. For example, if the cluster's k8s version is `1.17`, previously only the Bottlerocket `aws-k8s-1.15` variant was available; now the variant will be `aws-k8s-1.17`.

BREAKING CHANGE: Clusters running a k8s version other than `1.15` with the Bottlerocket AMI (`aws-k8s-1.15` variant) will trigger AMI and node replacement.

Closes: #10188 

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
Add option to supply a custom bundling image.

Closes #10194


----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
…s are used (#10339)

@eladb While working on this for the `aws-eks-next` module, I've started to second-guess whether we really need to force cluster replacement.

This little PR fixes all the circular dependency issues as well as corrects the `cluster.connections` to include the EKS managed cluster security group. **It does not require cluster replacement**.

Merging this PR will leave two quirks/limitations:

- The cluster will have a redundant control plane security group, just like it has now. While redundant, it doesn't hurt, and doesn't really justify a cluster replacement. 

- One cluster per stack, just like we have now. I think we've already agreed that this limitation also does not justify cluster replacement, and the only reason we wanted to implement it is because we wanted to piggy back on the previous argument, which I now believe is not strong enough.

WDYT?

--------

This PR resolves a few problems related to circular dependencies when the cluster is configured with security groups from other stacks.

## Problem

Currently, the cluster creates a dedicated security group for the `KubectlProvider` and configures the main control plane security group to allow connections from it:

https://github.com/aws/aws-cdk/blob/f5c5d060ffee19ac3daa47cbf6df8d3563133433/packages/%40aws-cdk/aws-eks/lib/cluster.ts#L918-L924

This eventually creates ingress rules in the stack of the control plane security group. These ingress rules refer to the `KubectlProvider` created in the cluster stack; hence, a dependency is formed between the control plane security group stack and the cluster stack.

`Control Plane Security Group Stack => Cluster Stack` (Control plane sg needs to allow connections from kubectl sg)

On the other hand, the cluster stack naturally depends on the control plane security group stack because the cluster needs the security group when it's created. And so:

`Cluster Stack => Control Plane Security Group Stack` (Cluster needs the security group)
  
When these two stacks are different, a circular dependency is created.

## Solution

Do not create a security group for the `KubectlProvider` at all, and simply re-use the EKS managed security group. This security group is already configured to allow connections to the control plane, so if we simply attach it to the `KubectlProvider`, everything should work.

This avoids the creation of the first dependency direction.

Fixes #9754 
Fixes #10020 
Fixes #9536

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
To prevent future errors like #10343, add a linting rule that would
have caught that bug.

The use of this linting rule requires normalizing code in multiple
places; there's no way to only positively check
for one case.

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
This workflow runs `npm-check-updates` throughout the repository, then runs
an equivalent of `yarn install --force` (the same effect as `yarn install && yarn upgrade`,
but as a single, more efficient command).

Some packages enjoy "special handling":
- `@types/node` will stay on the currently set *major* version (since this reflects
  the minimal supported `node` version)
- `@types/fs-extra` will stay on the currently set *major* version (because the
  latest major version is known to be broken with `@types/node >=10`, as seen
  in DefinitelyTyped/DefinitelyTyped#30398)
- `typescript` will stay on the currently set *minor* version (since it does not
  follow semantic versioning, and *Major.Minor* is where the stability point is)

This ensures all dependencies are up-to-date, on the latest version, including
transitive dependencies that may not have been updated by the `dependabot`
workflows previously.

Once done, this files a pull request if there are any changes. It requires a secret
called `AUTOMATION_GITHUB_TOKEN` to be created, containing a GitHub token with
`workflows` permissions, so that PR validation workflows will trigger on the PR.
Change behavior to match the behavior of the AWS CLI. If `--profile` is
set, the given profile from the INI files will be used to the exclusion
of everything else.

If `--profile` is not set, the regular ambient credential process
is followed.

`AWS_PROFILE` is part of ambient credentials, and setting it does not
lead to triggering the new behavior. This is consistent with how the AWS
CLI behaves.
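
In other words:

```console
# Uses only the 'ci' profile from ~/.aws/config and ~/.aws/credentials:
$ cdk deploy --profile ci

# No --profile: the regular ambient credential chain applies
# (env vars, AWS_PROFILE, EC2/ECS credentials, ...):
$ cdk deploy
```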


----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
reubinoff and others added 14 commits September 16, 2020 02:25
)

In the Pipeline example on how to create a pipeline with a stage and a source action, the stage name property has changed from `name` to `stageName`.


----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
Enable users with existing cluster subnet groups to specify the group name,
rather than creating a new subnet group.

_Note:_ Marked as `exempt-readme` because I don't think this deserves its own
README section. Feel free to disagree.

fixes #9241


----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
Closes #10273

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
…10223)

Fixes #10186 

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
There is a problem with the `Parameters` property of the `AWS::SSM::Association` resource type.
Server-side, its type is `Map<String, List<String>>`.
Unfortunately, that type is impossible to represent in the current CloudFormation specification that our code generation is based on.

Add a patch that changes the `Parameters` property of `AWS::SSM::Association` to:

```json
    "AWS::SSM::Association": {
      "Properties": {
        "Parameters": {
          "Type": "Map",
          "ItemType": "List",
          "PrimitiveItemItemType": "String",
          "UpdateType": "Mutable",
          "Required": false
        }
      }
    }
```

That is not a legal CFN spec type,
but we add a new property type, `MapOfListsOfPrimitives`,
to the `cfnspec` module that represents it.
We then modify our L1 code generation in `cfn2ts` to use the new type,
allowing `CfnAssociationProps.parameters` to have type `{ [key: string]: string[] }`.
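
A hedged usage sketch of the resulting L1 type (the SSM document name is real, but targets and other deployment details are omitted):

```ts
import * as ssm from '@aws-cdk/aws-ssm';

// `parameters` now accepts a map of string -> list of strings.
new ssm.CfnAssociation(this, 'Association', {
  name: 'AWS-RunShellScript',
  parameters: {
    commands: ['echo "hello"'],
    workingDirectory: ['/tmp'],
  },
});
```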

Fixes #3092

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
Missed passing the new required `--cloudformation-execution-policies`
argument in a couple of tests, and ignore the backwards compatibility
tests that don't pass it yet.


----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
 Support Logout URLs in UserPoolClient.

This is a pretty minimal change to just allow the property to be set.
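
A hedged sketch, assuming the new property is exposed as `logoutUrls` under the client's `oAuth` settings (the URL is a placeholder):

```ts
import * as cognito from '@aws-cdk/aws-cognito';

declare const userPool: cognito.UserPool;

new cognito.UserPoolClient(this, 'Client', {
  userPool,
  oAuth: {
    logoutUrls: ['https://example.com/signed-out'],
  },
});
```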

fixes: #10225

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
…es (#10327)

closes #10305

Add support for passing a physical resource id into the parameters
of an AwsCustomResource


----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
Continuation of #8526

This change adds a feature to SubnetSelection which lets the user provide objects that implement the new `ISubnetSelector` interface. These objects will be used by the VPC class to choose the subnets from the VPC.

This commit also provides an implementation of `ISubnetSelector` that lets the user select subnets that contain specified IP addresses.
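
A rough sketch of what such a selector could look like; the `ISubnetSelector` method name and the way it plugs into `SubnetSelection` are assumptions here, not the confirmed API:

```ts
import * as ec2 from '@aws-cdk/aws-ec2';

// Hypothetical selector that keeps only subnets whose IPv4 CIDR contains
// at least one of the given addresses.
class ContainsIpAddressesSelector {
  constructor(private readonly ipAddresses: string[]) {}

  // Assumed ISubnetSelector contract: narrow down a candidate subnet list.
  public selectSubnets(subnets: ec2.ISubnet[]): ec2.ISubnet[] {
    return subnets.filter(subnet =>
      this.ipAddresses.some(ip => cidrContains(subnet.ipv4CidrBlock, ip)));
  }
}

function ipToInt(ip: string): number {
  return ip.split('.').reduce((acc, octet) => (acc << 8) + parseInt(octet, 10), 0) >>> 0;
}

function cidrContains(cidr: string, ip: string): boolean {
  const [base, bitsStr] = cidr.split('/');
  const bits = parseInt(bitsStr, 10);
  const mask = bits === 0 ? 0 : (~0 << (32 - bits)) >>> 0;
  return (ipToInt(base) & mask) === (ipToInt(ip) & mask);
}
```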

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
…10389)

The new ClusterSubnetGroup L2 was added to the cluster in #10340.
However, adding this resource with an id of `Default` (to preserve backwards
compatibility), means that the implicit logic of `this.node.defaultChild`
breaks.

Flipping the order, so the ClusterSubnetGroup uses `Default` as the logical ID,
fixes both issues.

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
A few packages were missing the maturity property.
Also modified the description to match what we currently use when we create
libraries for new services.


----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
…10373)

We erroneously assumed Conditions cannot be used in Outputs and Rules,
but we forgot that `Fn::If` actually uses the name of a Condition as the first argument.
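
For example, in this template fragment (a complete template would also need a `Resources` section), the string `IsProd` inside `Fn::If` must be recognized as a reference to the Condition:

```json
{
  "Parameters": {
    "Stage": { "Type": "String", "Default": "dev" }
  },
  "Conditions": {
    "IsProd": { "Fn::Equals": [{ "Ref": "Stage" }, "prod"] }
  },
  "Outputs": {
    "BucketNameHint": {
      "Value": { "Fn::If": ["IsProd", "prod-bucket-name", "dev-bucket-name"] }
    }
  }
}
```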

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
In CloudFormation, it's legal to pass a string where a boolean value is expected.
Take that into account when performing diffs,
and allow that case in `@aws-cdk/cloudformation-include`.
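
For example, CloudFormation accepts the following, where `FifoQueue` is declared as a boolean in the spec but given as a string:

```json
{
  "Resources": {
    "JobsQueue": {
      "Type": "AWS::SQS::Queue",
      "Properties": {
        "QueueName": "jobs.fifo",
        "FifoQueue": "true"
      }
    }
  }
}
```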

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
flemjame-at-amazon merged commit af13454 into flemjame-at-amazon:master on Sep 16, 2020