kubernetes_manifest listing all CRDs each time #1651
Comments
Hi, thanks for opening this conversation. The provider needs to make sure it knows about any CRDs that may have been created during the same operation, so that it can handle any CRs of those types. However, I think there is room for optimizing the number of these calls. We deliberately avoided introducing optimizations until the provider had stabilized enough; in fact, we had to roll back some caching we introduced too early because it was causing hard-to-diagnose issues. At this point, I think we can take a look at reducing the number of CRD retrieval calls.
Marking this issue as stale due to inactivity. If this issue receives no comments in the next 30 days it will automatically be closed. If this issue was automatically closed and you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. This helps our maintainers find and focus on the active issues. Maintainers may also remove the stale label at their discretion. Thank you!
This is starting to hurt us badly: we have a set of 20 resources (Flux) being created, and running a plan takes ~20 minutes and often times out. Can this please be prioritized?
FWIW, we moved to kubectl_manifest and so far it's been much better.
I can try that, but it would be good to address this issue too.
Just noting that this is still an active issue with the upstream HashiCorp provider, so the bot doesn't close it. Generally speaking, if you're managing CRDs or CRs at all (or anything else applied via bare YAML), you shouldn't use HashiCorp's provider.
This continues to be quite slow. Is there any ETA on when this will be addressed?
Terraform Version, Provider Version and Kubernetes Version
Affected Resource(s)
kubernetes_manifest
Steps to Reproduce
terraform plan
Expected Behavior
The kubernetes provider should query the API server once to get the list of known CRDs, and cache the result for subsequent resource reads.
Actual Behavior
It appears that the provider executes a LIST query of /apis/apiextensions.k8s.io/v1/customresourcedefinitions for each kubernetes_manifest resource. Somehow it's not caching the results, and this makes running plan unnecessarily slow. This occurs even with kubernetes_manifest resources of basic built-in types like ConfigMap.
Here is the stack trace:
And here is a pprof sorted by cumulative time:
References
Community Note