This can be a bit of a topic for discussion, but here is some context.
Over the last year, I have experimented with a few of the deployment models supported by Chalice. While I acknowledge that advanced use cases need CDK, Terraform, or SAM, the simple 'chalice deploy' has value even beyond dev workflows.
One reason is that for 90% of small to medium sized businesses, static environments (separated static VPCs and/or separated accounts) such as dev -> test -> prod are good enough. They can improve their level of sophistication over time, but I believe many of them would still want to stick with 'chalice deploy'.
My simple scheme, driven by GitHub workflows, works perfectly for this — with one exception, which leads to a question.
Is 'chalice deploy' meant to make the state of the app inside AWS match how it is defined in the code? Or was it a deliberate design decision to only perform upsert-type operations when changing the app's state in AWS?
For example, I have noticed that if I rename a pure Lambda handler, the old function stays in place. If that function is driven by an event schedule, that is a real problem, because I have to remove it by hand within some deadline (minutes or hours) before it fires again.
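The rename problem above can be boiled down to a few lines. This is a hypothetical sketch of an upsert-only deployer, not Chalice's actual internals — the names are illustrative:

```python
# Hypothetical sketch: why an upsert-only deploy leaves orphans after a rename.
# Maps function name -> its configuration (illustrative, not Chalice's real model).

def upsert_deploy(deployed: dict, desired: dict) -> dict:
    """Create or update every desired resource; never delete anything."""
    result = dict(deployed)
    result.update(desired)
    return result

# Before the rename: one scheduled Lambda called "nightly_job".
deployed = {"nightly_job": "schedule(rate=1 hour)"}

# After renaming the handler to "hourly_job" in app.py:
desired = {"hourly_job": "schedule(rate=1 hour)"}

deployed = upsert_deploy(deployed, desired)
# Both functions now exist in AWS -- the old one keeps firing on its schedule.
print(sorted(deployed))  # ['hourly_job', 'nightly_job']
```

Since the deployer only ever merges desired state on top of deployed state, nothing ever tells it that `nightly_job` should no longer exist.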
Are there plans to add that option in the future, i.e. a flag in the config that cleans up orphaned resources during a deploy?
chalice deploy is not transactional and has a number of issues (it always updates; it doesn't compute a delta diff), which means it will always do more work and be slower than a real infra deployment tool on larger projects, as well as have the potential of leaving behind junk. It's not a proper infrastructure tool: it only cleans up based on the current configuration, not past deployments. If you really want a goal-based / declarative infra tool that tracks provisioned resource state and cleans up properly, you should use Terraform or CloudFormation.
If you want transactional deploys as that phrase is commonly used, you need to use the CloudFormation package output; CFN will automatically roll back deployments on error. Your usage of the phrase is really about cleaning up on changes, which either Terraform or CFN will do.
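The cleanup those tools provide comes from diffing desired state against recorded state rather than blindly upserting. A minimal sketch of the idea (illustrative names; not Terraform's or CloudFormation's actual algorithm):

```python
# Sketch of a declarative plan step: compare recorded state with desired state
# and derive creates, updates, AND deletes. Illustrative only.

def plan(deployed: dict, desired: dict):
    """Return (to_create, to_update, to_delete) for a declarative deploy."""
    to_create = {k: v for k, v in desired.items() if k not in deployed}
    to_update = {k: v for k, v in desired.items()
                 if k in deployed and deployed[k] != v}
    to_delete = [k for k in deployed if k not in desired]
    return to_create, to_update, to_delete

# The rename scenario from the question:
deployed = {"nightly_job": "schedule(rate=1 hour)"}
desired = {"hourly_job": "schedule(rate=1 hour)"}

create, update, delete = plan(deployed, desired)
print(delete)  # ['nightly_job'] -- the renamed-away function gets removed
```

The key difference from an upsert-only deployer is the `to_delete` set, which requires the tool to persist a record of past deployments, not just read the current configuration.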
I tried to start improving the internal chalice deploy toward idempotent deploys (i.e. deterministic zip file generation), but the internal code structure of Chalice is such that it's hard to make these kinds of changes (multi-thousand-line module files) to improve the internal provisioning. Every time I contributed to Chalice (Terraform and automatic layer support, for example), every single file conflicted between the PRs due to this poor code structure, coupled with less-than-ideal maintainer practices on this repo (1+ year for PR feedback on some of those PRs). Honestly, Chalice ditching its internal provisioning would reduce the code base by thousands of lines, and given the lack of features in the internal provisioning, it's probably a net win for users. It would also reduce the level of effort for supporting commonly requested features like #1321.
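Deterministic zip generation, mentioned above as a building block for idempotent deploys, is mostly a matter of fixing timestamps, permissions, and file order. A standalone sketch using Python's stdlib (this is not Chalice's packager):

```python
import io
import zipfile

def deterministic_zip(files: dict) -> bytes:
    """Build a zip archive whose bytes depend only on file names and contents.

    Two things normally make zip output non-reproducible: filesystem
    modification times and directory iteration order. Fix both by pinning
    the timestamp and sorting the entries.
    """
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for name in sorted(files):  # stable ordering regardless of input order
            # 1980-01-01 is the earliest timestamp the zip format allows.
            info = zipfile.ZipInfo(name, date_time=(1980, 1, 1, 0, 0, 0))
            info.external_attr = 0o644 << 16  # pin the unix permission bits
            zf.writestr(info, files[name])
    return buf.getvalue()

files = {"app.py": b"from chalice import Chalice\n", "requirements.txt": b"chalice\n"}
# Byte-identical on every run, so a deployer could skip the upload
# when the hash of the artifact hasn't changed.
assert deterministic_zip(files) == deterministic_zip(files)
```

With reproducible artifacts, a deploy step can compare the new archive's hash against the previously deployed one and become a no-op when nothing changed.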