fix: S3 uris are allowed capital letters #294
Conversation
S3 URIs are allowed to have capital letters.
Force-pushed from db4c42e to 906df37
Sorry, I just popped into this PR and saw the unusual regex.
@@ -81,7 +81,7 @@ jobs:
          echo "session-name=$SESSION_NAME" >> $GITHUB_OUTPUT
      - name: validate
        env:
-         REGEXP_S3_BUCKET: ^s3://[a-z0-9_/.-]+$
+         REGEXP_S3_BUCKET: ^s3://[a-zA-Z0-9_/.-]+$
Since A-Z is being added in, we can simply use \w to represent a-zA-Z0-9_. Also, not sure how it worked, but the . here means everything, and the / there doesn't seem to be valid? I think the regex should be \w\/\.- ? Or is it not actually a regex, just something similar?
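For what it's worth, in a POSIX extended regex a bracket expression treats . and / as literal characters, so the class [a-zA-Z0-9_/.-] needs no escaping. A minimal sketch of how the pattern behaves when fed to grep -E (the check function below is hypothetical, not the workflow's actual step):

```shell
# Inside a bracket expression, '.' and '/' are literal in POSIX ERE,
# so [a-zA-Z0-9_/.-] matches exactly those characters, unescaped.
REGEXP_S3_BUCKET='^s3://[a-zA-Z0-9_/.-]+$'

check() {
  if echo "$1" | grep -Eq "$REGEXP_S3_BUCKET"; then
    echo "valid:   $1"
  else
    echo "invalid: $1"
  fi
}

check 's3://my-bucket/Some/Path.txt'   # letters, digits, /, ., - all in the class
check 's3://bad bucket'                # space is not in the class
check 's3://x*y'                       # '.' is literal here, not a wildcard, so '*' fails
```

So the existing pattern is a valid regex as written; the only behavioural question is whether uppercase letters should be accepted.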
By default, the bucket name in the endpoint needs to be lowercase, e.g.
https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucketnamingrules.html
The following naming rules apply for directory buckets.
- Be unique within the chosen AWS Region and Availability Zone.
- Name must be between 3 (min) and 63 (max) characters long, including the suffix.
- Consists only of lowercase letters, numbers and hyphens (-).
- Begin and end with a letter or number.
- Must include the following suffix: --azid--x-s3.
- Bucket names must not start with the prefix xn--.
- Bucket names must not start with the prefix sthree-.
- Bucket names must not start with the prefix sthree-configurator.
- Bucket names must not start with the prefix amzn-s3-demo-.
- Bucket names must not end with the suffix -s3alias. This suffix is reserved for access point alias names. For more information, see Using a bucket-style alias for your S3 bucket access point.
- Bucket names must not end with the suffix --ol-s3. This suffix is reserved for Object Lambda Access Point alias names. For more information, see How to use a bucket-style alias for your S3 bucket Object Lambda Access Point.
- Bucket names must not end with the suffix .mrap. This suffix is reserved for Multi-Region Access Point names. For more information, see Rules for naming Amazon S3 Multi-Region Access Points.
For objects:
You can use any UTF-8 character in an object key name. However, using certain characters in key names can cause problems with some applications and protocols. The following guidelines help you maximize compliance with DNS, web-safe characters, XML parsers, and other APIs.
and then there's a list of recommendations:
https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-keys.html
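The core character rules above (3-63 characters, lowercase letters, numbers, dots, and hyphens, beginning and ending with a letter or number) could be sketched as a stricter bucket-name pattern. This is an illustration only; the variable name REGEXP_BUCKET_NAME is hypothetical, and it does not enforce the reserved prefixes and suffixes (xn--, sthree-, -s3alias, etc.) or the directory-bucket suffix:

```shell
# Hypothetical stricter check for a bare bucket name:
# 3-63 chars of lowercase letters, digits, dots, and hyphens,
# starting and ending with a letter or digit.
REGEXP_BUCKET_NAME='^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$'

echo 'yum-prod.geonet.org.nz' | grep -Eq "$REGEXP_BUCKET_NAME" && echo "accepted"
echo 'Yum-Prod'               | grep -Eq "$REGEXP_BUCKET_NAME" || echo "rejected: uppercase"
```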
I think the problem is that the workflow input s3-bucket in practice can be bucket-only or a full S3 URI. For example, from docker-gamit-base:
fetch:
  uses: GeoNet/Actions/.github/workflows/reusable-copy-to-s3.yml@main
  with:
    artifact-name: gamit
    artifact-path: ./gg/
    s3-bucket: s3://yum-prod.geonet.org.nz/docker-extras/docker-gamit/
    cp-or-sync: cp
    direction: from # 'to' or 'from'
The reusable S3 workflow uses aws s3 as the copy mechanism, so I think the input should be treated as a URI.
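If the input really is a URI, one option would be to split it and apply different rules to each part: lowercase-only for the bucket name, capitals allowed in the key/prefix. A sketch under that assumption (the variable names and key pattern here are illustrative, not the workflow's actual code):

```shell
# Split an s3:// URI into bucket and key, then validate each part
# separately: strict lowercase rules for the bucket, a looser
# character class (capitals allowed) for the key/prefix.
uri='s3://yum-prod.geonet.org.nz/docker-extras/docker-gamit/'

rest="${uri#s3://}"        # strip the scheme
bucket="${rest%%/*}"       # everything up to the first '/'
key="${rest#"$bucket"}"    # remainder, including the leading '/'

echo "$bucket" | grep -Eq '^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$' && echo "bucket ok"
echo "$key"    | grep -Eq '^(/[a-zA-Z0-9_.-]*)*$'              && echo "key ok"
```

This keeps the bucket-name rules from the AWS documentation intact while still letting object keys carry uppercase letters.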
Force-pushed from 765f0bf to 7b9a17d
s3-bucket is being used as a URI, not a bucket name
Force-pushed from 7b9a17d to 803de8d
@CallumNZ
To avoid issue of ' being interpreted
@junghao good catch.
No description provided.