Allow arbitrary service quota requests #97
Conversation
Upgrade test results for build 732
NAT_GW_PER_AZ = {
  service_code  = "vpc"
  quota_code    = "L-FE5A380F"
  desired_quota = 5
},
RULES_PER_ACL = {
  service_code  = "vpc"
  quota_code    = "L-2AEEBF1A"
  desired_quota = 20
},
Having to manually look up service_code and quota_code is tedious and error-prone.
If you already have a script that can generate a Markdown file with all that data, could you update that script to generate TF code? The code could then give each quota an input var name to enable that quota and the module would internally handle all the service_code and quota_code details.
The issue with having an input var for each quota is that Terraform doesn't allow iterating over all input variables. As such (with ~4,300 configurable quotas), the resulting Terraform files would be huge, as you would have to manage the mappings for each individual entry.
As an example, consider the following auto-generated input var:
variable "rekognition_transactions_per_second_per_account_for_the_amazon_rekognition_face_liveness_operation_startfacelivenesssession" {
description = "Adjustable quota for Amazon Rekognition - Transactions per second per account for the Amazon Rekognition Face Liveness operation StartFaceLivenessSession"
type = number
default = 0
}
You would have to put the non-zero comparison somewhere (locals, a resource declaration, ...), and you would have to do this for each of the 4,000+ other inputs.
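To make the scale of that repetition concrete, here is a minimal sketch of what a generator script would have to emit for every single quota: one variable block plus one locals entry carrying the non-zero comparison. The function names, quota name, and description text below are illustrative assumptions, not the actual script.

```python
# Illustrative sketch only: what a codegen script would have to emit
# for EVERY configurable quota -- one variable block plus one locals
# entry with the non-zero comparison. All names here are hypothetical.

def emit_variable(name: str, description: str) -> str:
    """Render one auto-generated Terraform input variable block."""
    return (
        f'variable "{name}" {{\n'
        f'  description = "{description}"\n'
        f'  type        = number\n'
        f'  default     = 0\n'
        f'}}\n'
    )

def emit_locals_entry(name: str) -> str:
    """Render the per-quota non-zero check that has to live in locals."""
    return f'{name} = var.{name} != 0 ? var.{name} : null'

print(emit_variable(
    "nat_gateway",
    "Adjustable quota for VPC - NAT gateways per Availability Zone",
))
print(emit_locals_entry("nat_gateway"))
```

Multiply those two fragments by ~4,300 quotas and the point above about huge files follows directly.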
If we wanted to generate mappings for the quotas like they are now, we'd end up with a map of maps with thousands of entries:
codes = {
....
nacl_rules = {
quota_code = "L-2AEEBF1A"
service_code = "vpc"
},
nat_gateway = {
quota_code = "L-FE5A380F"
service_code = "vpc"
},
rekognition_transactions_per_second_per_account_for_the_amazon_rekognition_face_liveness_operation_startfacelivenesssession = {
quota_code = "XXXXXXXX"
service_code = "rekognition"
},
}
...
and you would still have to search for the keys in the source code.
The only thing gained here is not having to look up the quota/service tuple, just the internal map key.
No perfect solution here. One approach could be a general-purpose module like this, coupled with a wrapper module with a vastly reduced number of inputs/configurable quotas.
I agree there's no perfect solution here.
That said, what is the problem with having 4,000 input vars if it is all auto-generated anyway? Does Terraform not allow that?
The advantage of using input vars is we get auto-complete in IDEs that support it, which will make finding the inputs you want a little easier just by starting to type and hitting CTRL + SPACE. Also, we get some basic type checking, and our auto-generated docs would allow you to find what you need pretty easily as well with CTRL + F.
I understand 4,000 variables is annoying, but that's the nature of this domain, which has 4,000 quotas. It seems to me like 4,000 named inputs, with type safety, typeahead, descriptions, and auto-generated docs is the least bad option here, no?
I'll give it a shot 😄 . To get auto-complete, we have to use input vars and probably default the quotas to null to distinguish which quota requests we actually want to submit.
Also, are we happy with python or should this be like https://github.com/gruntwork-io/terraform-aws-security/tree/main/codegen?
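As a sketch of the null-default idea: a generator could emit variables with `default = null`, so the module distinguishes "not requested" from any real value with one shared filter expression instead of a per-quota comparison. This is an illustration under that assumption, not the module's actual code.

```python
# Hypothetical sketch: with `default = null`, unset quotas can be
# filtered by one shared Terraform expression in the module, e.g.:
#
#   requested = { for k, v in local.quotas : k => v if v != null }
#
# rather than a hand-written comparison per quota.

def emit_null_default_variable(name: str, description: str) -> str:
    """Render a variable block whose null default means "not requested"."""
    return (
        f'variable "{name}" {{\n'
        f'  description = "{description}"\n'
        f'  type        = number\n'
        f'  default     = null\n'
        f'}}\n'
    )

print(emit_null_default_variable(
    "nacl_rules",
    "Adjustable quota for VPC - Rules per network ACL",
))
```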
I'll give it a shot 😄 . To get auto-complete, we have to use input vars and probably default the quotas to null to distinguish which quota requests we actually want to submit.
Good point.
Also, are we happy with python or should this be like https://github.com/gruntwork-io/terraform-aws-security/tree/main/codegen?
Whatever is easiest/fastest.
After the latest commit, the quota request module is now auto-generated. However, there were certain issues. Some of the quotas ended up as duplicates, so I had to drop a few. Which code is the correct one, I don't know. These are the duplicates:
networkmonitor_number_of_probes_per_monitor: L-F192A8D6
networkmonitor_number_of_monitors_per_account_per_aws_region: L-15A52C9B
networkmonitor_number_of_probes_per_subnet_for_each_monitor: L-6A0F5BB1
networkmonitor_number_of_probes_per_monitor: L-4C539B84
networkmonitor_number_of_probes_per_subnet_for_each_monitor: L-417BD1FB
networkmonitor_number_of_monitors_per_account_per_aws_region: L-24751C61
transcribe_transactions_per_second_listcallanalyticscategories_operation: L-B4737A9F
transcribe_transactions_per_second_getcallanalyticscategory_operation: L-A07285C7
transcribe_transactions_per_second_createcallanalyticscategory_operation: L-020BD1D2
transcribe_maximum_number_of_rules_per_category_for_call_analytics_batch_jobs: L-2E269322
transcribe_the_number_of_concurrent_startcallanalyticsstreamtranscription_http_2_streaming_requests_: L-128229BB
transcribe_transactions_per_second_listmedicalvocabularies_operation: L-580FD495
transcribe_transactions_per_second_startmedicaltranscriptionjob_operation: L-BF30671E
transcribe_number_of_concurrent_call_analytics_batch_jobs: L-48FC5F8A
transcribe_transactions_per_second_startcallanalyticsstreamtranscription_websocket_operation: L-A0A9F543
transcribe_transactions_per_second_updatecallanalyticscategory_operation: L-B24B8474
transcribe_maximum_number_of_categories_for_call_analytics_batch_jobs: L-E9626652
transcribe_transactions_per_second_startcallanalyticsstreamtranscription_operation: L-EF81BED4
transcribe_transactions_per_second_startmedicalstreamtranscription_operation: L-A3F1B3F5
transcribe_transactions_per_second_deletemedicaltranscriptionjob_operation: L-AAFCE0A3
transcribe_transactions_per_second_updatemedicalvocabulary_operation: L-765784F2
transcribe_transactions_per_second_deletecallanalyticsjob_operation: L-E9D8884B
transcribe_transactions_per_second_getcallanalyticsjob_operation: L-2F86860B
transcribe_transactions_per_second_startcallanalyticsjob_operation: L-4B321684
transcribe_transactions_per_second_getmedicalvocabulary_operation: L-16E31C23
transcribe_number_of_pending_medical_vocabularies: L-836A857F
transcribe_number_of_concurrent_medical_batch_transcription_jobs: L-63F366BB
transcribe_transactions_per_second_deletecallanalyticscategory_operation: L-40CC32CD
transcribe_total_number_of_medical_vocabularies_per_account: L-79BBEFC1
transcribe_number_of_startmedicalstreamtranscription_websocket_requests: L-0FB6DE48
transcribe_transactions_per_second_getmedicaltranscriptionjob_operation: L-A99534C1
transcribe_transactions_per_second_listmedicaltranscriptionjobs_operation: L-2D4ED180
transcribe_transactions_per_second_deletemedicalvocabulary_operation: L-50B57C16
transcribe_transactions_per_second_listcallanalyticsjobs_operation: L-93564E36
transcribe_maximum_number_of_targets_allowed_per_category_for_call_analytics_batch_jobs: L-3533BA28
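Duplicates like the ones above can be detected mechanically. This sketch (abbreviated to two of the names listed, and not the actual generator code) groups quota codes by generated variable name and flags any name that maps to more than one code:

```python
from collections import defaultdict

# Sketch of duplicate detection: group quota codes by the generated
# variable name; any name with more than one code is a duplicate.
# Only two names from the list above are shown, for brevity.
pairs = [
    ("networkmonitor_number_of_probes_per_monitor", "L-F192A8D6"),
    ("networkmonitor_number_of_probes_per_monitor", "L-4C539B84"),
    ("networkmonitor_number_of_monitors_per_account_per_aws_region", "L-15A52C9B"),
    ("networkmonitor_number_of_monitors_per_account_per_aws_region", "L-24751C61"),
]

codes_by_name = defaultdict(set)
for name, code in pairs:
    codes_by_name[name].add(code)

duplicates = {n: sorted(c) for n, c in codes_by_name.items() if len(c) > 1}
for name, codes in sorted(duplicates.items()):
    print(f"{name}: {', '.join(codes)}")
```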
One final option to consider is adding an include or exclude list, so we only generate the included quotas, or those not in the exclude list.
This one is ready for review... I haven't properly cleaned up or documented the code until we determine if this is the way to go.
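The include/exclude option mentioned above could look roughly like this in the generator; the function name and data shape are assumptions for illustration, not the actual implementation:

```python
# Hypothetical helper for the include/exclude option: generate only the
# quotas named in `include`, or everything not named in `exclude`.
def select_quotas(quotas, include=None, exclude=None):
    if include is not None:
        wanted = set(include)
        return [q for q in quotas if q in wanted]
    unwanted = set(exclude or ())
    return [q for q in quotas if q not in unwanted]

quotas = ["nat_gateway", "nacl_rules", "transcribe_number_of_pending_medical_vocabularies"]
print(select_quotas(quotas, include=["nat_gateway"]))  # ['nat_gateway']
print(select_quotas(quotas, exclude=["nacl_rules"]))
```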
Huh, the dupes are weird!
As long as the module handles them in a deterministic way, I'm happy with whatever option you've chosen. If we need to change that later, we can. For now, the only thing I'd do is ensure that we document (in the README, Python code, etc) that there are dupes, so future maintainers can see we've thought about it and understand how we've handled it.
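The thread doesn't show which deterministic rule the PR actually adopted; one possible rule, sketched here purely as an example, is to keep the lexicographically smallest quota code whenever a generated name maps to several:

```python
# One possible deterministic dedupe rule (an assumption, not the rule
# the PR actually adopted): for each generated variable name, keep the
# lexicographically smallest quota code.
def dedupe(pairs):
    chosen = {}
    for name, code in sorted(pairs):   # sorted => stable, deterministic
        chosen.setdefault(name, code)  # first (smallest) code wins
    return chosen

pairs = [
    ("networkmonitor_number_of_probes_per_monitor", "L-F192A8D6"),
    ("networkmonitor_number_of_probes_per_monitor", "L-4C539B84"),
]
print(dedupe(pairs))  # {'networkmonitor_number_of_probes_per_monitor': 'L-4C539B84'}
```

Because the rule depends only on the input data and not on iteration order, re-running the generator always yields the same module.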
Upgrade test results for build 738
brikis98
left a comment
LGTM!
Upgrade test results for build 745
Added improved duplicate handling and documentation. Need a re-review @brikis98
brikis98
left a comment
Ah, clever way to handle the dupes.
LGTM!
Description
TODOs
Read the Gruntwork contribution guidelines.
Release Notes (draft)
request-quota-increase module to allow arbitrary quota requests
Migration Guide
The input format for resources_to_increase in request-quota-increase was removed. Change the input values to use quota-specific input variables, e.g.