<name> failed to fetch resource from kubernetes: the server could not find the requested resource #270
Comments
First thought: could this be related to the fact that the resource is not a namespaced resource, but rather a cluster resource (like a ClusterRole)? |
I'm testing with lower Kubernetes versions. Possibly related to the Helm charts being incompatible with those versions. |
Problem appears resolved on EKS v1.26. |
I am seeing the same issue on EKS Kubernetes 1.27 with Karpenter 0.27.5 and kubectl provider 1.14.0. |
I'm also seeing the same behavior: EKS 1.27, kubectl provider 1.14.0. I'm just trying to apply an ENIConfig after EKS creation with multiple subnets. Downgrading to EKS 1.26 resolved it. |
My EKS 1.27 cluster also has the issue, related to the provider and Karpenter:
I found in the EKS logs that a request was made to the Kubernetes API with an incorrect path, resulting in a 404 error response. It appears that the provider makes the request with an extra apiVersion, causing it to be duplicated in the path.
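A hypothetical reconstruction of what such a duplicated path could look like (illustrative only, not the actual log entry; Karpenter's cluster-scoped Provisioner is used as the example):

  GET /apis/karpenter.sh/v1alpha5/provisioners/default                        (correct path)
  GET /apis/karpenter.sh/v1alpha5/karpenter.sh/v1alpha5/provisioners/default  (duplicated apiVersion, 404)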
Full log entry:
|
Also getting this in EKS 1.27, but in my case I'm creating a ClusterSecretStore for external-secrets. Edit: I tried it, and it worked. However, I had to download the chart and store it with my module because it's no longer hosted at the URL above. It's a temporary workaround for my issue: the raw chart applies my ClusterSecretStore. |
Yup, same here: EKS 1.27, cluster-wide ENIConfig. |
Same here; it happens with multiple resources: the node template and provisioner for Karpenter, or the HorizontalRunnerAutoscaler for actions-runner-controller.
|
Seems to be a generic issue for 1.27. @gavinbunney, any chance you could look into this? Downgrading is not an option for many clusters... |
More debugging ensued. Switching to the kubernetes_manifest resource, I figured out that there is a delta between what is applied from the YAML and what the Kubernetes control plane returns the object as. Checking the resource YAML on the cluster and reconciling it back into the code fixed the issue with the kubernetes_manifest provider. I suggest trying the same with your resources and analyzing the differences, or you could switch to the kubernetes_manifest resource.
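As an illustration of the kind of delta meant here (hypothetical values, not taken from the original comment), Kubernetes canonicalizes resource quantities, so a field declared as a bare number comes back from the control plane as a string:

  # As written in the code:
  #   limits:
  #     cpu: 1
  # As returned by the control plane:
  #   limits:
  #     cpu: "1"
  # Aligning the code with the server's canonical form removes the permanent diff.
|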
I have the same issue trying to apply an ENIConfig resource on an EKS cluster, version 1.27. It worked on the first apply but failed after that. |
Since everyone in this issue seems to be talking about EKS, I'd like to add that we've run into the same issue with kubeadm-based clusters on GCP. So it's definitely a 1.27 thing and not platform-specific. |
I'll also add that I've started seeing this since upgrading a K3S cluster to |
First of all, is there currently a way to bypass this issue? |
@karmajunkie Thanks for the super fast feedback. Are you sure you meant to use the resources in |
@seungmun yep, that's the one. I'm not sure if that's going to be universally true but my needs are pretty straightforward. |
The Problem with The other option is to use a |
I believe you also can't create, e.g., the EKS cluster from scratch and apply your manifests in the same Terraform step. Let's be realistic though: this project has been abandoned. It would be awesome if @gavinbunney had time once more to update the code or to add other maintainers, but that might be unlikely. |
For me it's the mentioned problem with CRDs and CRs. I'm fairly sure the problem is related to the versions of the libraries used (prime suspect being the ancient Kubernetes client libraries) but I've never built a Terraform provider. |
I've tried to create a fork and update the Go modules. While this eventually worked to build some binaries after some minor code changes, I am not versed enough to continue from here onwards. It might be that a simple update of the underlying modules/libraries is not enough to reinstate the previous behavior of this provider, given all the upstream changes in the k8s API. |
I'm experiencing similar issues with the kubectl provider on EKS cluster v1.27. Sometimes the provider just drops resources from the tf state because it cannot find them. When I try to import resources back with
I've managed to find a workaround that works for my use case; I've switched to the following, and I've tested it by creating a new cluster from scratch.
New:
|
For anybody stuck, blocked, or losing hope for progress on this: if you need to deploy your CRs before the CRDs are available, then I'd suggest doing what I've done for now. E.g., before:

resource "kubectl_manifest" "envoyfilter-proxy_protocol-internal" {
  yaml_body = <<-EOF
    apiVersion: networking.istio.io/v1alpha3
    kind: EnvoyFilter
    metadata:
      name: proxy-protocol-internal
      namespace: istio-system
    spec:
      configPatches:
        - applyTo: LISTENER
          patch:
            operation: MERGE
            value:
              listener_filters:
                - name: envoy.filters.listener.proxy_protocol
                - name: envoy.filters.listener.tls_inspector
      workloadSelector:
        labels:
          istio: ingressgateway-internal
  EOF

  depends_on = [
    helm_release.istio-istiod
  ]
}
after:

resource "helm_release" "envoyfilter-proxy-protocol-internal" {
  name       = "envoyfilter-proxy-protocol-internal"
  namespace  = kubernetes_namespace.istio-system.metadata[0].name
  repository = "https://bedag.github.io/helm-charts/"
  chart      = "raw"
  version    = "2.0.0"

  values = [
    <<-EOF
    resources:
      - apiVersion: networking.istio.io/v1alpha3
        kind: EnvoyFilter
        metadata:
          name: proxy-protocol-internal
          namespace: istio-system
        spec:
          configPatches:
            - applyTo: LISTENER
              patch:
                operation: MERGE
                value:
                  listener_filters:
                    - name: envoy.filters.listener.proxy_protocol
                    - name: envoy.filters.listener.tls_inspector
          workloadSelector:
            labels:
              istio: ingressgateway-internal
    EOF
  ]

  depends_on = [
    helm_release.istio-istiod,
    kubernetes_namespace.istio-ingress
  ]
} |
A dirty workaround that works
|
#270 (comment) works great, thanks a ton! |
This looks like a good alternative! I'll give that a shot, thanks! |
@alekc thanks a lot. For the record, I also had to rename the provider in the state:
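A sketch of that rename (assuming the fork is published as alekc/kubectl on the Terraform Registry; the version constraint is illustrative):

terraform {
  required_providers {
    kubectl = {
      # Point at the maintained fork instead of gavinbunney/kubectl.
      source  = "alekc/kubectl"
      version = ">= 2.0"
    }
  }
}

# Then migrate existing state entries to the new provider address:
#   terraform state replace-provider registry.terraform.io/gavinbunney/kubectl registry.terraform.io/alekc/kubectl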
|
Also, as a side note, I would advise production users not to use this plugin anymore, since it looks unmaintained. One option I will take is to use Terraform to template deployments (e.g., to S3) and use Flux to take care of the deployment. This reduces the coupling between Terraform and Kubernetes while still letting us inject values from Terraform into Kubernetes automatically and use Terraform's templating tools.
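A minimal sketch of that split (the bucket name, paths, and variables are hypothetical): Terraform renders the manifest and uploads it to S3; Flux is configured separately to watch that bucket.

resource "aws_s3_object" "app_manifest" {
  bucket = "my-flux-source-bucket" # hypothetical bucket watched by Flux
  key    = "manifests/app.yaml"

  # Render the manifest with values injected from Terraform.
  content = templatefile("${path.module}/templates/app.yaml.tpl", {
    replicas = 3
  })
}
|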
@jodem but you do need a running Flux instance, don't you? So you are still out of luck for tasks like bootstrapping, etc. In theory you could make use of the https://artifacthub.io/packages/helm/kiwigrid/any-resource Helm chart, but it's still cranky (and I've had my share of issues with the helm provider, so I tend to avoid it). P.S. this provider is not actively maintained anymore; the one on my fork is ;) |
I use the helm provider to install Flux, and I will probably use the official kubernetes_manifest resource to bootstrap it (two manifests: one for the Flux S3 source, one to give it the S3 path to watch and synchronize). Or I would use your fork only for those two elements, which limits the impact (today I have 250+ manifests using the plugin, which puts my project in danger). |
@jodem I use the helm provider to install Flux and still use the kubectl provider to apply Flux CRs. The official kubernetes provider is terrible at CRDs, unfortunately. It requires cluster access to plan, making a single-pass cluster creation+bootstrap impossible. |
Yup, pretty much my setup. |
OK, I'll stick with your fork, Alekc; in case of problems it's "just" two manifests my ops will have to handle manually, they will survive :) |
Hello, we are experiencing the same issue. Since a solution is already available, I wonder whether it would make sense to open a PR on this repo to include the fix here as well. |
This repository has been abandoned and no PR will get approved by the maintainer. |
I had to do something similar. The only way that I found to make this work every time I apply my stack was manifests and local-exec:
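A minimal sketch of that pattern (the file path and trigger are illustrative):

resource "null_resource" "apply_manifests" {
  # Re-run kubectl apply whenever the manifest file changes.
  triggers = {
    manifest_sha = filesha256("${path.module}/manifests/crs.yaml")
  }

  provisioner "local-exec" {
    command = "kubectl apply -f ${path.module}/manifests/crs.yaml"
  }
}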
|
Able to resolve by replacing kubectl_manifest with kubernetes_manifest:
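A sketch of such a replacement (the ServiceMonitor fields are illustrative assumptions, not from the original comment):

resource "kubernetes_manifest" "servicemonitor" {
  manifest = {
    apiVersion = "monitoring.coreos.com/v1"
    kind       = "ServiceMonitor"
    metadata = {
      name      = "example-app" # hypothetical name
      namespace = "monitoring"
    }
    spec = {
      selector = {
        matchLabels = {
          app = "example-app"
        }
      }
      endpoints = [
        { port = "metrics" }
      ]
    }
  }
}
|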
This issue is mostly related to applying custom CRD manifests. |
For me this happened after the aws provider got upgraded from 5.41.0 to 5.42.0; downgrading solved the issue. |
We encountered comparable challenges with the kubectl provider and worked around them with a Helm chart. Here is the module link along with the code: https://github.com/aws-ia/terraform-aws-eks-data-addons/tree/main/helm-charts/karpenter-resources. You can check out this example of how to consume the Helm chart: https://github.com/awslabs/data-on-eks/blob/d0ae18bc8c85ec7e313a9146ec0b1c2a3b8a1550/analytics/terraform/spark-k8s-operator/addons.tf#L174 |
If it helps anyone, in my case I noticed this was happening when I set my |
It solved the problem for quite a short time. After 1 or 2 cycles of |
I pivoted too and |
I solved the problem by setting |
Solved my issue as well |
The issue
I'm running into an error with Karpenter YAML templates and I'm unsure what the cause is. I've used kubectl_manifest in the past and it worked fine on consecutive applies, but for some reason it's not working with these custom resources.
EKS version: 1.27
gavinbunney kubectl provider version: 1.14
Terraform: 1.4.6
Sample YAML file:
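A minimal example of the kind of manifest described (Karpenter 0.27.x uses the karpenter.sh/v1alpha5 API; the names and values here are illustrative, not the reporter's actual file):

resource "kubectl_manifest" "karpenter_provisioner" {
  yaml_body = <<-EOF
    apiVersion: karpenter.sh/v1alpha5
    kind: Provisioner
    metadata:
      name: default
    spec:
      requirements:
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["spot"]
      limits:
        resources:
          cpu: "1000"
      providerRef:
        name: default
      ttlSecondsAfterEmpty: 30
  EOF
}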
When I try to apply it through Terraform, it plans just fine, but the apply gives me the error from the title: failed to fetch resource from kubernetes: the server could not find the requested resource.
I don't know where to start looking to resolve this. The items never show up in the state file, but the objects are created inside the cluster. It looks like the provider creates them, then tries to read them back, but doesn't seem to find them.
I'll keep digging, as this provider is my only way of applying YAML to the cluster through Terraform without using null_resource (a dirty last-resort hack, IMO) or kubernetes_manifest (which doesn't work if the CRDs don't exist). I'm not sure if this is related to the fact that these are custom resources (through CRDs).