[aws-cloudfront] logging bucket defined using url and not bucket name #10512

Closed
jtaylor-foodhub opened this issue Sep 24, 2020 · 3 comments · Fixed by #10570
Labels: @aws-cdk/aws-cloudfront (Related to Amazon CloudFront), bug (This issue is a bug.), effort/small (Small work item – less than a day of effort), in-progress (This issue is being actively worked on.), p2

Comments


jtaylor-foodhub commented Sep 24, 2020

When deploying a CloudFrontWebDistribution with an existing bucket, the deployed configuration's Logging.Bucket is the FQDN of the bucket and not the bucket name.

Reproduction Steps

interface Props {
  logBucket: IBucket;
}

...

     this.cloudfrontDistro = new CloudFrontWebDistribution(this, "distro", {
        ...
        loggingConfig: {
          bucket: props.logBucket,
          includeCookies: true,
          prefix: `dev/cloudfront/`,
        },
        originConfigs: [
          ...
        ],
      });
    }
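
For reference, a minimal self-contained version of the reproduction (a sketch assuming CDK v1, matching the 1.63.0 CLI below; the bucket name, construct ids, and origin configuration are placeholders, not taken from the original stack):

import * as cdk from '@aws-cdk/core';
import * as cloudfront from '@aws-cdk/aws-cloudfront';
import * as s3 from '@aws-cdk/aws-s3';

export class ReproStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // Existing bucket that should receive the access logs (placeholder name).
    const logBucket = s3.Bucket.fromBucketName(this, 'LogBucket', 'my-bucket-name');
    // Placeholder origin bucket so the distribution is deployable on its own.
    const originBucket = new s3.Bucket(this, 'OriginBucket');

    new cloudfront.CloudFrontWebDistribution(this, 'distro', {
      loggingConfig: {
        bucket: logBucket,
        includeCookies: true,
        prefix: 'dev/cloudfront/',
      },
      originConfigs: [
        {
          s3OriginSource: { s3BucketSource: originBucket },
          behaviors: [{ isDefaultBehavior: true }],
        },
      ],
    });
  }
}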

What did you expect to happen?

Logging bucket is specified by name:

aws cloudfront get-distribution --id XXXXXXXXXX
...
            "Logging": {
                "Enabled": true,
                "IncludeCookies": true,
                "Bucket": "my-bucket-name",
                "Prefix": "dev/cloudfront/"
            },
...

What actually happened?

Logging bucket is specified by its regional domain name (FQDN) rather than its name:

aws cloudfront get-distribution --id XXXXXXXXXX
...
            "Logging": {
                "Enabled": true,
                "IncludeCookies": true,
                "Bucket": "my-bucket-name.s3.eu-west-2.amazonaws.com",
                "Prefix": "dev/cloudfront/"
            },
...

In addition, logs never appear in this bucket, even after a few days.

Environment

  • CLI Version : 1.63.0 (build 7a68125)
  • Framework Version:
  • Node.js Version: v12.14.1
  • OS : MacOS 10.15.6 (19G2021)
  • Language (Version): TypeScript (3.9.7)

Other


This is 🐛 Bug Report

@jtaylor-foodhub jtaylor-foodhub added bug This issue is a bug. needs-triage This issue or PR still needs to be triaged. labels Sep 24, 2020
@github-actions github-actions bot added the @aws-cdk/aws-cloudfront Related to Amazon CloudFront label Sep 24, 2020
@jtaylor-foodhub (Author)

When I click through to the logging bucket from the distro in the console I get this...

(Two screenshots attached, taken 2020-09-24 at 14:28:41 and 14:26:56)

njlynch (Contributor) commented Sep 28, 2020

Thanks for the bug report.

This actually seems a bit like a quirk in how the console(s) handle regional vs global S3 bucket domains, but it's a pretty easy (and non-invasive) change to fix.

Per the CloudFormation docs, the Bucket property should be:

The Amazon S3 bucket to store the access logs in, for example, myawslogbucket.s3.amazonaws.com.

This is the domain name of the bucket, whereas the CloudFront CDK constructs today are using the regional domain name. Logging still works with this configuration, but this new CloudFront Logs section of the console isn't handling the link well. For your last screenshot, if you go to https://console.aws.amazon.com/s3/buckets/_REDACTED_/?region=us-east-1 instead of https://console.aws.amazon.com/s3/buckets/_REDACTED_.eu-west-2.amazonaws.com/?region=us-east-1, you should see the logs. I was able to reproduce this with a distribution, where I saw the link from the CloudFront console go to an unknown bucket/bad data, but dropping the suffix showed the correct bucket with the logs present.

So while everything's technically working here, it does seem CloudFront is expecting the global -- not the regional -- bucket domain, and fixing it will make the console experience a bit smoother.
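
For illustration, the two bucket domain formats in question, as exposed on the CDK's IBucket (a sketch; myBucket is a placeholder, and the example domains assume a bucket in eu-west-2 as in the report above):

import * as s3 from '@aws-cdk/aws-s3';

declare const myBucket: s3.IBucket;

// Global domain name: the form shown in the CloudFormation docs for Logging.Bucket.
const globalDomain = myBucket.bucketDomainName;            // e.g. my-bucket-name.s3.amazonaws.com
// Regional domain name: the form the CDK constructs were emitting at the time.
const regionalDomain = myBucket.bucketRegionalDomainName;  // e.g. my-bucket-name.s3.eu-west-2.amazonaws.com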

@njlynch njlynch added effort/small Small work item – less than a day of effort p2 and removed needs-triage This issue or PR still needs to be triaged. labels Sep 28, 2020
njlynch added a commit that referenced this issue Sep 28, 2020
According to the CloudFront docs, the logging bucket should be specified as the
bucket domain name. #2554 updated origin buckets to use the regional bucket
domain names -- which is correct -- but also incorrectly updated the logging
bucket specifications as well.

This has a minor impact of being unable to navigate to the logging bucket from
the CloudFront console, but otherwise the logs are stored correctly.

fixes #10512
@njlynch njlynch added the in-progress This issue is being actively worked on. label Sep 28, 2020
@mergify mergify bot closed this as completed in #10570 Sep 29, 2020
mergify bot pushed a commit that referenced this issue Sep 29, 2020
According to the CloudFront docs, the logging bucket should be specified as the
bucket domain name. #2554 updated origin buckets to use the regional bucket
domain names -- which is correct -- but also incorrectly updated the logging
bucket specifications as well.

This has a minor impact of being unable to navigate to the logging bucket from
the CloudFront console, but otherwise the logs are stored correctly.

fixes #10512


----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
zxkane (Contributor) commented Oct 17, 2020

It looks like CloudFront does not accept the global S3 domain name in all partitions. My CDK app hit the error below after upgrading to 1.68.0 in the China region (cn-northwest-1):

4:04:39 PM | UPDATE_FAILED        | AWS::CloudFront::Distribution                   | CloudFrontDist/CFDistribution
The parameter Logging Bucket is invalid. The bucket is in a Region that you cannot send logs to with this distribution. (Service: AmazonCloudFront; Status Code: 400;

The fix introduces a regression that breaks CloudFront distributions with a logging bucket in this partition.
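
Until this is resolved, one possible escape-hatch workaround is to override the synthesized Logging.Bucket back to the regional domain name. This is only a sketch, assuming the underlying CfnDistribution child id is CFDistribution (as the error output above suggests); distribution and logBucket are placeholders:

import * as cloudfront from '@aws-cdk/aws-cloudfront';
import * as s3 from '@aws-cdk/aws-s3';

declare const distribution: cloudfront.CloudFrontWebDistribution;
declare const logBucket: s3.IBucket;

// Reach into the underlying CfnDistribution and force the regional bucket
// domain name, which CloudFront in the CN partition accepts.
const cfnDist = distribution.node.findChild('CFDistribution') as cloudfront.CfnDistribution;
cfnDist.addPropertyOverride(
  'DistributionConfig.Logging.Bucket',
  logBucket.bucketRegionalDomainName,
);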

njlynch added a commit that referenced this issue Oct 19, 2020
This reverts #10512. The logging buckets were originally using the regional
domain names, but this caused odd behavior with CloudFront's new console "Logs"
experience. #10512 switched logging buckets to use the global domain name, which
addressed the console issue but broke customers in CN regions.

We will follow up internally to improve the CloudFront console issue.

fixes #10923