
Treat new revisions of ECS task definitions as updates instead of new resources #11506

Closed

Conversation

Contributor

@sworisbreathing commented Jan 7, 2020

Community Note

  • Please vote on this pull request by adding a 👍 reaction to the original pull request comment to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for pull request followers and do not help prioritize the request

Relates #632

Release note for CHANGELOG:

resource/aws_ecs_task_definition:
* Changing any attribute which requires a new revision of the task definition to be created will be reflected as an "update", and the old revision will no longer be marked as `INACTIVE`. As a consequence of this change, these resources now use the task `family` as their ID instead of the `arn`, and the Terraform resource will always read the latest `ACTIVE` revision from ECS.
* Task definitions can now be imported by `arn`, `family` or `family:revision`.
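
To make the proposed family-based ID concrete, here is a minimal Go sketch (illustrative only, not code from this PR; the helper name `latestActiveRevision` and the family value "my-task-family" are placeholders of mine) of how a Read step can resolve the latest ACTIVE revision from the family alone. It relies only on the AWS SDK's DescribeTaskDefinition call, which returns the latest ACTIVE revision when given just a family name.

```go
// Hypothetical sketch: resolve the latest ACTIVE revision of a task
// definition family, as a family-based resource ID would require.
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ecs"
)

// latestActiveRevision is an illustrative helper, not code from this PR.
func latestActiveRevision(conn *ecs.ECS, family string) (*ecs.TaskDefinition, error) {
	// DescribeTaskDefinition accepts a family, "family:revision", or a full ARN.
	// Given only the family, ECS returns the latest ACTIVE revision.
	out, err := conn.DescribeTaskDefinition(&ecs.DescribeTaskDefinitionInput{
		TaskDefinition: aws.String(family),
	})
	if err != nil {
		return nil, err
	}
	return out.TaskDefinition, nil
}

func main() {
	sess := session.Must(session.NewSession())
	// "my-task-family" is a placeholder family name.
	td, err := latestActiveRevision(ecs.New(sess), "my-task-family")
	if err != nil {
		panic(err)
	}
	fmt.Println(aws.StringValue(td.TaskDefinitionArn))
}
```

This is what would let the resource ID stay stable across revisions while the provider always reads the newest ACTIVE one.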

Output from acceptance testing:

$ make testacc TEST=./aws TESTARGS='-run=TestAccAWSEcsTaskDefinition_*'
==> Checking that code complies with gofmt requirements...
TF_ACC=1 go test ./aws -v -count 1 -parallel 20 -run=TestAccAWSEcsTaskDefinition_* -timeout 120m
=== RUN   TestAccAWSEcsTaskDefinition_basic
=== PAUSE TestAccAWSEcsTaskDefinition_basic
=== RUN   TestAccAWSEcsTaskDefinition_withScratchVolume
=== PAUSE TestAccAWSEcsTaskDefinition_withScratchVolume
=== RUN   TestAccAWSEcsTaskDefinition_withDockerVolume
=== PAUSE TestAccAWSEcsTaskDefinition_withDockerVolume
=== RUN   TestAccAWSEcsTaskDefinition_withDockerVolumeMinimalConfig
=== PAUSE TestAccAWSEcsTaskDefinition_withDockerVolumeMinimalConfig
=== RUN   TestAccAWSEcsTaskDefinition_withTaskScopedDockerVolume
=== PAUSE TestAccAWSEcsTaskDefinition_withTaskScopedDockerVolume
=== RUN   TestAccAWSEcsTaskDefinition_withEcsService
=== PAUSE TestAccAWSEcsTaskDefinition_withEcsService
=== RUN   TestAccAWSEcsTaskDefinition_withTaskRoleArn
=== PAUSE TestAccAWSEcsTaskDefinition_withTaskRoleArn
=== RUN   TestAccAWSEcsTaskDefinition_withNetworkMode
=== PAUSE TestAccAWSEcsTaskDefinition_withNetworkMode
=== RUN   TestAccAWSEcsTaskDefinition_withIPCMode
=== PAUSE TestAccAWSEcsTaskDefinition_withIPCMode
=== RUN   TestAccAWSEcsTaskDefinition_withPidMode
=== PAUSE TestAccAWSEcsTaskDefinition_withPidMode
=== RUN   TestAccAWSEcsTaskDefinition_constraint
=== PAUSE TestAccAWSEcsTaskDefinition_constraint
=== RUN   TestAccAWSEcsTaskDefinition_changeVolumesForcesNewResource
=== PAUSE TestAccAWSEcsTaskDefinition_changeVolumesForcesNewResource
=== RUN   TestAccAWSEcsTaskDefinition_arrays
=== PAUSE TestAccAWSEcsTaskDefinition_arrays
=== RUN   TestAccAWSEcsTaskDefinition_Fargate
=== PAUSE TestAccAWSEcsTaskDefinition_Fargate
=== RUN   TestAccAWSEcsTaskDefinition_ExecutionRole
=== PAUSE TestAccAWSEcsTaskDefinition_ExecutionRole
=== RUN   TestAccAWSEcsTaskDefinition_Inactive
=== PAUSE TestAccAWSEcsTaskDefinition_Inactive
=== RUN   TestAccAWSEcsTaskDefinition_Tags
=== PAUSE TestAccAWSEcsTaskDefinition_Tags
=== RUN   TestAccAWSEcsTaskDefinition_ProxyConfiguration
=== PAUSE TestAccAWSEcsTaskDefinition_ProxyConfiguration
=== CONT  TestAccAWSEcsTaskDefinition_basic
=== CONT  TestAccAWSEcsTaskDefinition_constraint
=== CONT  TestAccAWSEcsTaskDefinition_ProxyConfiguration
=== CONT  TestAccAWSEcsTaskDefinition_Tags
=== CONT  TestAccAWSEcsTaskDefinition_Inactive
=== CONT  TestAccAWSEcsTaskDefinition_ExecutionRole
=== CONT  TestAccAWSEcsTaskDefinition_Fargate
=== CONT  TestAccAWSEcsTaskDefinition_arrays
=== CONT  TestAccAWSEcsTaskDefinition_changeVolumesForcesNewResource
=== CONT  TestAccAWSEcsTaskDefinition_withEcsService
=== CONT  TestAccAWSEcsTaskDefinition_withPidMode
=== CONT  TestAccAWSEcsTaskDefinition_withIPCMode
=== CONT  TestAccAWSEcsTaskDefinition_withNetworkMode
=== CONT  TestAccAWSEcsTaskDefinition_withTaskRoleArn
=== CONT  TestAccAWSEcsTaskDefinition_withDockerVolumeMinimalConfig
=== CONT  TestAccAWSEcsTaskDefinition_withTaskScopedDockerVolume
=== CONT  TestAccAWSEcsTaskDefinition_withDockerVolume
=== CONT  TestAccAWSEcsTaskDefinition_withScratchVolume
--- PASS: TestAccAWSEcsTaskDefinition_withScratchVolume (35.63s)
--- PASS: TestAccAWSEcsTaskDefinition_constraint (36.58s)
--- PASS: TestAccAWSEcsTaskDefinition_withTaskScopedDockerVolume (37.20s)
--- PASS: TestAccAWSEcsTaskDefinition_withDockerVolumeMinimalConfig (38.06s)
--- PASS: TestAccAWSEcsTaskDefinition_arrays (38.34s)
--- PASS: TestAccAWSEcsTaskDefinition_withDockerVolume (38.34s)
--- PASS: TestAccAWSEcsTaskDefinition_ProxyConfiguration (42.32s)
--- PASS: TestAccAWSEcsTaskDefinition_withTaskRoleArn (48.37s)
--- PASS: TestAccAWSEcsTaskDefinition_withPidMode (48.94s)
--- PASS: TestAccAWSEcsTaskDefinition_withIPCMode (48.99s)
--- PASS: TestAccAWSEcsTaskDefinition_withNetworkMode (49.07s)
--- PASS: TestAccAWSEcsTaskDefinition_ExecutionRole (51.01s)
--- PASS: TestAccAWSEcsTaskDefinition_Fargate (54.54s)
--- PASS: TestAccAWSEcsTaskDefinition_Inactive (61.19s)
--- PASS: TestAccAWSEcsTaskDefinition_changeVolumesForcesNewResource (63.12s)
--- PASS: TestAccAWSEcsTaskDefinition_basic (66.97s)
--- PASS: TestAccAWSEcsTaskDefinition_Tags (86.41s)
--- PASS: TestAccAWSEcsTaskDefinition_withEcsService (119.73s)
PASS
ok      github.com/terraform-providers/terraform-provider-aws/aws       122.003s

@ghost added the size/M, needs-triage, and service/ecs labels Jan 7, 2020
Contributor Author

sworisbreathing commented Jan 7, 2020

Basic manual testing looks good; however, there's still a fair amount of work to be done before this PR is ready:

  • Passes testing. A few tests are failing, but at this point I haven't modified any test cases yet.
  • Avoids CustomizeDiff. I've used it in this particular case, and I'm not sure whether I can (or should) avoid it. (Update: I've thought about this a bit, and due to the special logic in EcsContainerDefinitionsAreEquivalent, I believe a CustomizeDiff is unavoidable; a rough sketch follows this list.)
  • Acceptance test coverage of new behavior. Not yet clear whether I need separate acceptance tests or if I should just update the existing ones. I'll figure that out once I get the existing tests to pass :-) (edit: now that the acceptance tests are passing, I think the existing tests should cover the changes in functionality)
  • Documentation updates
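
On the CustomizeDiff item above: the following is a rough, hypothetical sketch (not the code in this PR) of how such a hook could lean on the provider's existing EcsContainerDefinitionsAreEquivalent helper. The helper's exact signature, the function name customizeEcsTaskDefinitionDiff, and the SDK v1 import path are assumptions on my part.

```go
// Hypothetical sketch (not this PR's code): clear the diff on
// container_definitions when the old and new JSON are semantically
// equivalent, so a cosmetic change does not register a new revision.
package aws

import (
	"github.com/aws/aws-sdk-go/service/ecs"
	"github.com/hashicorp/terraform-plugin-sdk/helper/schema"
)

func customizeEcsTaskDefinitionDiff(d *schema.ResourceDiff, meta interface{}) error {
	if !d.HasChange("container_definitions") {
		return nil
	}
	o, n := d.GetChange("container_definitions")
	isAWSVPC := d.Get("network_mode").(string) == ecs.NetworkModeAwsvpc

	// Assumed signature of the provider's existing equivalency helper.
	equivalent, err := EcsContainerDefinitionsAreEquivalent(o.(string), n.(string), isAWSVPC)
	if err != nil {
		return err
	}
	if equivalent {
		// No semantic change: drop the attribute diff entirely.
		return d.Clear("container_definitions")
	}
	return nil
}
```

Whether the same effect can be achieved without CustomizeDiff is exactly the open question in the checklist item above.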

@ghost added the tests, documentation, and size/L labels and removed the size/M label Jan 8, 2020
@sworisbreathing marked this pull request as ready for review January 8, 2020 04:11
@sworisbreathing requested a review from a team January 8, 2020 04:11
@sworisbreathing changed the title from "[WIP] Treat new revisions of ECS task definitions as updates instead of new resources" to "Treat new revisions of ECS task definitions as updates instead of new resources" Jan 8, 2020
Contributor Author

sworisbreathing commented Jan 8, 2020

Okay I'm pretty sure I've got this ready for review now. Some notes for the reviewer:

  • Acceptance tests were executed in ap-southeast-2. Due to my company's policies I am not able to spin up resources in us-west-2.
  • A few of the test cases fail intermittently, but I don't believe it's anything related to my changes. The intermittent failures all appear to be either `Error: ClientException: Too many concurrent attempts to create a new revision of the specified family...` (which looks to be just a rate-limiting problem) or incorrect tag errors such as `Check failed: Check 2/3 error: aws_ecs_task_definition.test: Attribute 'tags.%' expected "1", got "0"` (which I think is a race condition caused by resource tagging being eventually consistent).
  • migrateEcsTaskDefinitionstateV0toV1() still uses the arn attribute instead of the task family (old ID vs. new ID). I've got a stashed change for this in my local copy, but I'm not sure whether it's needed, so I haven't pushed it up (see the sketch after this list).
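
To illustrate the ID change discussed in the last bullet, here is a small hypothetical helper (the function name, error text, and example ARN are mine, not the stashed change) that derives the family-based ID from an ARN-based one using only the documented ECS ARN layout.

```go
// Hypothetical sketch: map an ARN-based ID to the family-based ID,
// e.g. for use in a state migration like the one mentioned above.
package main

import (
	"fmt"
	"strings"
)

// taskDefinitionFamilyFromID is an illustrative name, not code from this PR.
func taskDefinitionFamilyFromID(id string) (string, error) {
	// A task definition ARN looks like:
	//   arn:aws:ecs:<region>:<account-id>:task-definition/<family>:<revision>
	if !strings.HasPrefix(id, "arn:") {
		return id, nil // assume the ID is already a family name
	}
	parts := strings.SplitN(id, "/", 2)
	if len(parts) != 2 {
		return "", fmt.Errorf("unexpected ECS task definition ARN: %s", id)
	}
	familyAndRevision := parts[1]
	if i := strings.LastIndex(familyAndRevision, ":"); i >= 0 {
		return familyAndRevision[:i], nil
	}
	return familyAndRevision, nil
}

func main() {
	// Placeholder ARN for illustration only.
	family, err := taskDefinitionFamilyFromID("arn:aws:ecs:ap-southeast-2:123456789012:task-definition/example:7")
	if err != nil {
		panic(err)
	}
	fmt.Println(family) // prints "example"
}
```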

@sworisbreathing
Contributor Author

Rebased to fix merge conflicts and force-pushed.

Contributor Author

sworisbreathing commented Feb 6, 2020

The current Travis failure appears to be unrelated to my change; it looks like there might have been a disruption between Travis and GitHub.

The Travis job for #11892 looks like it had similar problems.

@kellym56

@sworisbreathing - any update on this?

@sworisbreathing
Contributor Author

@kellym56 I don't have any more information than what you see here on the PR. Still waiting for review/approval from the repository maintainers. If you sort the open PRs by 👍, this one's sitting about 3/4 of the way down page 2, so there are still a few prioritized above it.

I'm really keen to see it merged because I'm getting tired of doing a `terraform state rm` every time I need to deploy a new build in order to keep the old task from being deleted (especially annoying since, for some reason I haven't quite figured out, I have to do it twice).

@sworisbreathing force-pushed the gh-632-update-task-def branch from 05b6e51 to 949d869 April 14, 2020 02:57
@sworisbreathing
Contributor Author

Hi @bflad and @anGie44, can I please get a reviewer assigned to this PR?

Contributor Author

sworisbreathing commented Jul 21, 2020

> Yo how's this effort going?

@siassaj I've yet to hear back from any of the repository maintainers, and no one has been assigned to or reviewed the PR despite repeated requests through multiple channels.

As best I can tell there's been no movement on it.

@david74chou

Yes, really looking forward to having this PR land...


siassaj commented Jul 23, 2020

@sworisbreathing idiotically I didn't realise it had been under review this whole time, sorry for badgering you.

I do look forward to it, though... managing the definitions in Terraform and CI has been... unfun.

@sworisbreathing
Contributor Author

Hi @maryelizbeth @bflad @gdavison @anGie44 @breathingdust @ksatirli, can we please start the ball rolling on the review process for this PR?

@sworisbreathing
Contributor Author

Also tagging @YakDriver for possible review

Base automatically changed from master to main January 23, 2021 00:56
@breathingdust requested a review from a team as a code owner January 23, 2021 00:56
@sworisbreathing
Contributor Author

I've just updated to resolve a conflict that appeared due to changes that got merged after I raised the PR nearly a year and a half ago. I'm pretty sure this happened once before, but I rebased that time so I don't have visibility of it. This time I resolved it through GitHub, and it got added as a merge commit.

Contributor Author

sworisbreathing commented May 6, 2021

ping @maryelizbeth @bflad @gdavison @anGie44 @breathingdust @ksatirli @YakDriver.
Can we please start the review process? As of tomorrow this PR will have been open for 17 months.

@breathingdust added the enhancement label and removed the needs-triage label Oct 6, 2021
@zhelding
Contributor

Pull request #21306 has significantly refactored the AWS Provider codebase. As a result, most PRs opened prior to the refactor now have merge conflicts that must be resolved before proceeding.

Specifically, PR #21306 relocated the code for all AWS resources and data sources from a single aws directory to a large number of separate directories in internal/service, each corresponding to a particular AWS service. This separation of code has also allowed us to simplify the names of underlying functions -- while still avoiding namespace collisions.

We recognize that many pull requests have been open for some time without yet being addressed by our maintainers. Therefore, we want to make it clear that resolving these conflicts in no way affects the prioritization of a particular pull request. Once a pull request has been prioritized for review, the necessary changes will be made by a maintainer -- either directly or in collaboration with the pull request author.

For a more complete description of this refactor, including examples of how old filepaths and function names correspond to their new counterparts, please refer to issue #20000.

For a quick guide on how to amend your pull request to resolve the merge conflicts resulting from this refactor and bring it in line with our new code patterns, please refer to our Service Package Refactor Pull Request Guide.

@YakDriver
Member

@sworisbreathing Thank you for your contribution! The main gist of this PR is covered by #22269. However, some portions, such as the improvements to importing, are not. I recommend moving those enhancements to a new PR once #22269 is merged.

@github-actions

This functionality has been released in v3.72.0 of the Terraform AWS Provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading.

For further feature requests or bug reports with this functionality, please create a new GitHub issue following the template. Thank you!

Contributor Author

sworisbreathing commented Jan 23, 2022

@YakDriver let me see if I understand correctly....

  • you (the provider maintainers) ignore this PR for 2 years, despite repeated requests for code review through multiple channels
  • you (the provider maintainers) close the underlying feature request saying it's not the way the provider should work, and blaming the design of the AWS APIs (despite the fact that you do already transparently treat new revisions of Lambdas and IAM policies as updates, and also manage multiple AWS resources in the aws_security_group's ingress{} and egress{} blocks)
  • you (the provider maintainers) then roll your own PR to implement a subset of the functionality, and then (contrary to your own stated policies as to how PRs are prioritised for review) merge that one even though it has fewer upvotes than this one.
  • and then you have the gall to ask for a new pull request to implement additional functionality from this one that you didn't even bother to look at for 2 years?

Forgive me if I sound a bit bitter, but there's a real "not invented here" vibe coming out of all of this. Given y'all's attitude towards community contributions, why should I raise a new PR that's just going to be ignored until one of the provider maintainers decides they'd rather write their own code for it instead?

@justinretzolk
Member

Hey @sworisbreathing 👋 I wanted to follow up here to acknowledge that the experience you had here isn't how we want things to go, and let you know that we're taking steps to make sure this kind of thing doesn't happen in the future. We care a lot about our community and hopefully the length of this reply, and that four different people worked on it, helps show that.

First things first; an introduction: My name is Justin and I am the technical community manager for the AWS provider. I started this role in September, and was the second hire of this type of role at HashiCorp (though for what it's worth, I have been with the company since 2018). The entire point of my role is to be a liaison between the company and our community, to try to make sure we're doing right by folks who are contributing back to our tools, like you've done here.

As you may have noticed, the AWS provider gets a lot of attention (at the time of writing, there's around 3.1k open issues and 475 open PRs, which is down from the peak of 820 open PRs in early/mid 2021!), which is why the AWS provider was one of the first areas that received a technical community manager. We also have a new developer starting soon, and look to be hiring another later this year. With these new members, we hope to have more time to focus on the current backlog and addressing efforts from the community. This isn't an excuse for what you experienced, but rather an acknowledgement that we needed to do better, and are actively trying to.

I read over the history of this PR and #22269 to try to get a better idea of what happened here so that we could have a better understanding of where things went wrong, so that we could learn from it. I wanted to relay some of those findings to you here for the sake of transparency. This got a little lengthy as I typed it, but I feel you deserve as much information as I can give.

I see that a few comments above you noted the ":+1:" prioritization; that is still part of how we prioritize things, and while I can't speak to the history of whether other things wound up getting more reactions after your comment, I can only infer that likely what happened is that other things came up and got prioritized above this functionality for one reason or another (for what it’s worth, there are other things that are taken into consideration when prioritizing). I also wanted to touch on the fact that you'd @ mentioned a few of our engineers a couple of times to no avail. Unfortunately, this method isn't always as fruitful as we'd like simply due to the sheer volume of times where our engineers are tagged on PRs and issues. I'm hopeful that with my being here now, I'll be able to keep up with this sort of thing more often and hopefully fill that gap.

Because #632 was closed, it was no longer on our radar when it came time to review for work that needed to be completed by the next milestone. On the other hand #258 was on our radar, as it was still open and had 49 ":+1:" reactions when @YakDriver picked it up, which is why we began to work on a new PR to fix it. It's our practice to search for other issues and PRs when starting work on fixing an issue, to try to catch situations like this where someone else has already opened a PR -- frankly, it's in our best interest to not repeat work that's already done!

In looking at the edit history of the PR description of #22269, I see that @YakDriver looked around for related PRs. However, the engineering team had agreed on a new approach for these types of situations, reflected in #11997 - merged Nov 18, 2021, that avoided breaking changes. The approach was not taken in any of the PRs, including #11506. We strive to apply new patterns as consistently as possible. We recognize that you would have had no way of knowing, unless you followed the Lambda PR.

In addition, on 2021-10-19, we refactored the repository leading to all open PRs needing to be altered in order to resolve merge conflicts. We were (and are) unable to consider merging any PRs that have not had these conflicts resolved. Given that @YakDriver's PR followed the new pattern and was built on the refactored structure, it took precedence.

I, and the rest of the team, recognize that this was not a great experience for you, and I speak for all of us when I say that we understand your frustration, most of us having been on the contributor side also, and we're sorry to have caused it. I hope that this experience doesn't put you off of contributing, and that this breakdown helps in some way. If you'd like to continue the discussion, I'd be more than happy to. We can do so here, or if you'd prefer you can send me a message on Discuss (I use the same username there, so I'm relatively easy to find).

@sworisbreathing
Contributor Author

Hi @justinretzolk

> Hey @sworisbreathing 👋 I wanted to follow up here to acknowledge that the experience you had here isn't how we want things to go, and let you know that we're taking steps to make sure this kind of thing doesn't happen in the future. We care a lot about our community and hopefully the length of this reply, and that four different people worked on it, helps show that.

> First things first; an introduction: My name is Justin and I am the technical community manager for the AWS provider. I started this role in September, and was the second hire of this type of role at HashiCorp (though for what it's worth, I have been with the company since 2018). The entire point of my role is to be a liaison between the company and our community, to try to make sure we're doing right by folks who are contributing back to our tools, like you've done here.

> As you may have noticed, the AWS provider gets a lot of attention (at the time of writing, there's around 3.1k open issues and 475 open PRs, which is down from the peak of 820 open PRs in early/mid 2021!), which is why the AWS provider was one of the first areas that received a technical community manager. We also have a new developer starting soon, and look to be hiring another later this year. With these new members, we hope to have more time to focus on the current backlog and addressing efforts from the community. This isn't an excuse for what you experienced, but rather an acknowledgement that we needed to do better, and are actively trying to.

Wow. Thank you. It's great to see an acknowledgement that things went wrong, and even more so the appointment of a community manager for the project to help improve the engagement with the open source community.

> I see that a few comments above you noted the "👍" prioritization; that is still part of how we prioritize things, and while I can't speak to the history of whether other things wound up getting more reactions after your comment, I can only infer that likely what happened is that other things came up and got prioritized above this functionality for one reason or another (for what it’s worth, there are other things that are taken into consideration when prioritizing). I also wanted to touch on the fact that you'd @ mentioned a few of our engineers a couple of times to no avail. Unfortunately, this method isn't always as fruitful as we'd like simply due to the sheer volume of times where our engineers are tagged on PRs and issues. I'm hopeful that with my being here now, I'll be able to keep up with this sort of thing more often and hopefully fill that gap.

I figured as much. Ultimately the bulk of the provider maintainers are employed directly by HashiCorp, and it's understandable that you'll need to balance your commercial priorities with the community ones. But the contributing guide should reflect that reality. It'd be nice to have a bit more transparency around this process, or at the very least, call it out in the contributing guide that upvotes are only part of the way issues are prioritised.

As for the @ mentioning, that was sort of a last resort after trying the other "official" paths stated in the contributing guide. For what it's worth, I didn't see a lot of responses to other folks' requests for PR reviews from gitter/discuss either.

> Because #632 was closed, it was no longer on our radar when it came time to review for work that needed to be completed by the next milestone. On the other hand #258 was on our radar, as it was still open and had 49 "👍" reactions when @YakDriver picked it up, which is why we began to work on a new PR to fix it. It's our practice to search for other issues and PRs when starting work on fixing an issue, to try to catch situations like this where someone else has already opened a PR -- frankly, it's in our best interest to not repeat work that's already done!

I'll argue that the closing of #632 should have been handled differently. I would have liked to have seen a bit more discussion between the community and the maintainers first. And there were several requests that the comment thread not be locked so users could continue to discuss workarounds. But those requests went unheeded and comments were locked anyway.

I'm not sure why I hadn't tagged @YakDriver on this PR along with the other maintainers. Possibly he wasn't listed in the docs I was looking at? I don't really remember. But @breathingdust had seen it and flagged it for review. The last time I checked the open PRs, this one was only on page 2 if you sorted by 👍 reactions, and it probably would have been high on the list if you'd searched open PRs for ECS, sorted by 👍.

> In looking at the edit history of the PR description of #22269, I see that @YakDriver looked around for related PRs. However, the engineering team had agreed on a new approach for these types of situations, reflected in #11997 - merged Nov 18, 2021, that avoided breaking changes. The approach was not taken in any of the PRs, including #11506. We strive to apply new patterns as consistently as possible. We recognize that you would have had no way of knowing, unless you followed the Lambda PR.

#22269 appears to have been merged without a code review. Granted, @YakDriver did the code review for #11997, but still, it might have been worth further discussion. For both of those, I would argue that `skip_destroy` is something that rightfully belongs as a lifecycle hook in Terraform itself, rather than something a plugin developer needs to explicitly implement. Better still would be a lifecycle hook to specifically treat destroy-and-recreate as an update, which would still allow for explicit destroys.
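
For readers who haven't followed #11997 and #22269, here is a rough, hypothetical sketch of the per-resource `skip_destroy` pattern being debated above, written against the plugin SDK's helper/schema package. It is not the provider's actual implementation; everything except the one relevant branch is elided.

```go
// Hypothetical sketch of the "skip_destroy" pattern (not the provider's real
// code): destroy becomes a no-op when the flag is set, so the replaced task
// definition revision stays registered (ACTIVE) in ECS.
package main

import (
	"log"

	"github.com/hashicorp/terraform-plugin-sdk/helper/schema"
)

func resourceTaskDefinitionSketch() *schema.Resource {
	return &schema.Resource{
		Schema: map[string]*schema.Schema{
			"skip_destroy": {
				Type:     schema.TypeBool,
				Optional: true,
				Default:  false,
			},
			// ...the resource's existing arguments are unchanged...
		},
		Delete: func(d *schema.ResourceData, meta interface{}) error {
			if d.Get("skip_destroy").(bool) {
				log.Printf("[DEBUG] Retaining task definition revision %q", d.Id())
				return nil // leave the old revision in place
			}
			// ...otherwise deregister the revision as before...
			return nil
		},
	}
}

func main() {
	_ = resourceTaskDefinitionSketch()
}
```

Whether this flag belongs in each resource that needs it, or once in Terraform core as a lifecycle option, is exactly the design question raised here.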

> In addition, on 2021-10-19, we refactored the repository leading to all open PRs needing to be altered in order to resolve merge conflicts. We were (and are) unable to consider merging any PRs that have not had these conflicts resolved. Given that @YakDriver's PR followed the new pattern and was built on the refactored structure, it took precedence.

If you look at the history on this PR, I rebased it 3 different times due to other stuff getting merged in. This contributed a bit to my frustration, considering that some of those PRs were newer than mine and/or had fewer votes. I did notice merge conflicts after the provider refactor, but I didn't see the point of putting in the effort of another rebase, when none of the maintainers was looking at the PR anyway.

> I, and the rest of the team, recognize that this was not a great experience for you, and I speak for all of us when I say that we understand your frustration, most of us having been on the contributor side also, and we're sorry to have caused it. I hope that this experience doesn't put you off of contributing, and that this breakdown helps in some way. If you'd like to continue the discussion, I'd be more than happy to. We can do so here, or if you'd prefer you can send me a message on Discuss (I use the same username there, so I'm relatively easy to find).

The experience did put me off of making additional contributions, but your reply was very thoughtful and considerate, and exactly the sort of thing that would make me willing to contribute in the future.

@github-actions

I'm going to lock this pull request because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems related to this change, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions bot locked as resolved and limited conversation to collaborators May 26, 2022