
Complete Construct Model #611

Open
awsmjs opened this issue Jan 18, 2024 · 5 comments

Comments

@awsmjs
Contributor

awsmjs commented Jan 18, 2024

Description

It’s clear that constructs are central to the CDK experience. We repeatedly hear from CDK builders that the most important improvement the CDK can make to its construct experience is greater service coverage. Because of this, the CDK team is investigating how we can provide more comprehensive AWS service coverage. In the past, the CDK has relied on community- and in-house-authored constructs to meet construct demand. Going forward, our goal is to bring CDK builders a complete and consistent construct model through largely autogenerated means. This model would provide builders with Day 1 support for AWS services--available the moment services launch--complete with the features that builders love in their L2s. As we investigate this effort, the rollout toward this goal may be gradual. If you have any feedback or comments, we would highly appreciate you sharing them as a comment on this RFC.

@alecl

alecl commented Feb 29, 2024

It would be great to prioritize moving the EventBridge Scheduler L2 targets out of alpha. It's tough to justify adopting an API that may introduce significant breaking changes at any time, which is what alpha implies.

In particular, the Lambda Invoke target is the highest-value one for my groups and likely many others.

https://github.com/aws/aws-cdk/blob/main/packages/%40aws-cdk/aws-scheduler-targets-alpha/lib/lambda-invoke.ts
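For context, using that alpha target looks roughly like the following. This is a sketch based on the `@aws-cdk/aws-scheduler-alpha` and `@aws-cdk/aws-scheduler-targets-alpha` packages as they existed in early 2024; exact API details may have shifted since:

```typescript
import * as cdk from 'aws-cdk-lib';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as scheduler from '@aws-cdk/aws-scheduler-alpha';
import * as targets from '@aws-cdk/aws-scheduler-targets-alpha';

class SchedulerStack extends cdk.Stack {
  constructor(scope: cdk.App, id: string) {
    super(scope, id);

    const fn = new lambda.Function(this, 'Handler', {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: 'index.handler',
      code: lambda.Code.fromInline('exports.handler = async () => {};'),
    });

    // Invoke the function every 5 minutes via EventBridge Scheduler.
    new scheduler.Schedule(this, 'Schedule', {
      schedule: scheduler.ScheduleExpression.rate(cdk.Duration.minutes(5)),
      target: new targets.LambdaInvoke(fn, {
        input: scheduler.ScheduleTargetInput.fromObject({ source: 'scheduler' }),
      }),
    });
  }
}
```

Under the hood the L2 also grants the scheduler role permission to invoke the function, which is exactly the kind of glue an L1-only experience leaves to the builder.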

@gshpychka

To be honest, it's not clear how you could achieve L2s via autogenerated means - could you give an example? As far as I understand, the purpose of L2 Constructs is to provide developers with opinionated abstractions - someone in the loop must be injecting these opinions.
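To make the "opinionated abstraction" point concrete, here is a toy sketch (hypothetical names, not real CDK code) of the gap an L2 closes: the L1 surface mirrors CloudFormation properties one-to-one, while the L2 surface encodes opinions such as safe defaults:

```typescript
// L1-style props: a one-to-one mirror of the CloudFormation resource schema.
interface CfnBucketProps {
  bucketName: string;
  bucketEncryption?: { sseAlgorithm: string };
  publicAccessBlockConfiguration?: { blockPublicAcls: boolean };
  versioningConfiguration?: { status: 'Enabled' | 'Suspended' };
}

// L2-style props: a smaller surface with the opinions baked in.
interface BucketProps {
  name: string;
  versioned?: boolean;
}

// The L2's job: expand opinionated defaults into the raw L1 shape.
function buildL1Props(props: BucketProps): CfnBucketProps {
  return {
    bucketName: props.name,
    // Opinion: encrypt and block public access unless told otherwise.
    bucketEncryption: { sseAlgorithm: 'aws:kms' },
    publicAccessBlockConfiguration: { blockPublicAcls: true },
    versioningConfiguration: { status: props.versioned ? 'Enabled' : 'Suspended' },
  };
}

console.log(JSON.stringify(buildL1Props({ name: 'logs', versioned: true })));
```

The open question in this thread is exactly who, or what, authors the `buildL1Props`-style opinions for each service.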

@bdoyle0182

bdoyle0182 commented Dec 10, 2024

@gshpychka an LLM with a baseline ruleset / design principles, made by AWS / the community, to follow could make those opinionated decisions :)

AWS / the community can review and validate that the principles were followed in the generation, which is still a significant decrease in the human development effort to add new constructs.

@wladyslawczyzewski

IMO, an LLM will add more issues than value. L2 constructs (as well as L3) require at least some understanding of how the services work and what CloudFormation expects in the templates. Feeding the CloudFormation docs to an LLM will not solve the issue, as sometimes (often?) the documentation is either outdated or incomplete, and hands-on experience with the service is required to design and implement a working solution.

@bdoyle0182

bdoyle0182 commented Feb 17, 2025

The point of the LLM is not to fully automate the generation process. You can provide it with rules and design principles that are consistent across all L2s in the code it generates, covering both the source and the unit / integration tests. There would still be a human in the loop to review and make changes to the generated code. This would significantly lower the engineering effort needed to keep up with and maintain L2s across the full AWS API. Most of the burden on AWS is budgeting a limited number of engineers who understand a service well enough to design and write an implementation, which an LLM could draft in a few minutes. The community's / AWS's responsibility is keeping up with and improving the guardrails / design principles that L2s follow, to improve the experience and keep consistency.

The development flow could look something like this:

  1. An agent automatically produces (or a human manually prompts for) a branch with a proposed L2 for a new L1 resource type.
  2. The community / service experts are given a chance to give feedback on what the agent produced.
  3. Once feedback is collected, an AWS engineer or community member is allocated to make changes / corrections based on expertise and the provided feedback.
  4. A human submits the final PR with the L2; an MR agent with another ruleset for L2 design principles reviews it immediately, and then the community can give final reviews.
  5. The L2 is accepted and now falls under the required maintenance and support commitment.

The mindset, at least at this point in the AI journey, should be "how can an LLM improve project velocity and reduce developer overhead from repetitive work / problems solved through agents," not "how can I use this tool to build and release something production-ready without me in the loop as the overseer of all generated work." Just like in any other management chain, when you sign off on something, you take responsibility for having reviewed and approved the work, and you take on the liability.

That doesn't take a human fully out of the loop for reviewing that the work done is of the required quality, but it reduces the amount of work an individual has to do to get an L2 out there and released by an order of magnitude.
