
Don't count Custom Resources if not enabled at NIC startup #5749

Merged: 3 commits merged into main from fix/cr on Jun 14, 2024

Conversation

jjngx (Contributor)

@jjngx jjngx commented Jun 13, 2024

Proposed changes

This PR fixes an issue caused by attempting to count custom resources when custom resources (CRs) are disabled.
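The idea behind the fix, as a minimal sketch (this is not the actual diff in this PR): telemetry counters for CR-backed resources should short-circuit when custom resources were disabled at startup, instead of dereferencing listers that were never created. `Collector` and `PolicyCount` are names taken from the panic trace in the review comment below; everything else here (`CollectorConfig`, the `Policies` accessor) is assumed purely for illustration.

```go
package telemetry

// Policy is a stand-in for a CR-backed policy object; illustrative only.
type Policy struct{ Name string }

// CollectorConfig holds only what this sketch needs; the real collector
// config in the project has a different shape.
type CollectorConfig struct {
	// CustomResourcesEnabled is an assumed flag mirroring -enable-custom-resources.
	CustomResourcesEnabled bool
	// Policies is an assumed accessor for CR-backed policies.
	Policies func() []Policy
}

// Collector mirrors the name from the panic trace; fields are illustrative.
type Collector struct {
	Config CollectorConfig
}

// PolicyCount returns 0 when custom resources are disabled, so the telemetry
// report never touches listers that were never initialized.
func (c *Collector) PolicyCount() int {
	if !c.Config.CustomResourcesEnabled || c.Config.Policies == nil {
		return 0
	}
	return len(c.Config.Policies())
}
```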

Checklist

Before creating a PR, run through this checklist and mark each as complete.

  • I have read the CONTRIBUTING doc
  • I have added tests that prove my fix is effective or that my feature works
  • I have checked that all unit tests pass after adding my changes
  • I have updated necessary documentation
  • I have rebased my branch onto main
  • I will ensure my PR is targeting the main branch and pulling from my branch in my own fork

@github-actions github-actions bot added labels bug (An issue reporting a potential bug) and go (Pull requests that update Go code) on Jun 13, 2024
@jjngx jjngx linked an issue Jun 13, 2024 that may be closed by this pull request
@jjngx jjngx marked this pull request as ready for review June 14, 2024 09:19
@jjngx jjngx requested a review from a team as a code owner June 14, 2024 09:19
@AlexFenlon (Contributor)

Here is some testing.
On main, we get the output below when CRDs are disabled.
In this test I am disabling custom resources with the Helm value --set controller.enableCustomResources=false.
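For reference, a minimal way to reproduce this setup (the release and chart names below are illustrative; the important part is the enableCustomResources value from the comment above):

```
helm upgrade --install my-release nginx-stable/nginx-ingress \
  --set controller.enableCustomResources=false
```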
The crash loop happens when NIC gets to collecting telemetry and fails to do so, resulting in the loop below. Here is the output on main:

E0614 09:32:13.150502       1 runtime.go:79] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
goroutine 1721 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic({0x1eee0e0, 0x38797d0})
	k8s.io/apimachinery@v0.30.1/pkg/util/runtime/runtime.go:75 +0x85
k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc000582c40?})
	k8s.io/apimachinery@v0.30.1/pkg/util/runtime/runtime.go:49 +0x6b
panic({0x1eee0e0?, 0x38797d0?})
	runtime/panic.go:770 +0x132
github.com/nginxinc/kubernetes-ingress/internal/k8s.(*LoadBalancerController).getAllPolicies(0xc0006ca008)
	github.com/nginxinc/kubernetes-ingress/internal/k8s/controller.go:3483 +0xbe
github.com/nginxinc/kubernetes-ingress/internal/telemetry.(*Collector).PolicyCount(0xc0000ed600)
	github.com/nginxinc/kubernetes-ingress/internal/telemetry/cluster.go:170 +0x45
github.com/nginxinc/kubernetes-ingress/internal/telemetry.(*Collector).BuildReport(_, {_, _})
	github.com/nginxinc/kubernetes-ingress/internal/telemetry/collector.go:257 +0x139c
github.com/nginxinc/kubernetes-ingress/internal/telemetry.(*Collector).Collect(0xc0000ed600, {0x26eb370, 0xc0004de410})
	github.com/nginxinc/kubernetes-ingress/internal/telemetry/collector.go:110 +0xd0
k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext.func1()
	k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:259 +0x1f
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00093bf08, {0x26c7d60, 0xc000762090}, 0x1, 0xc00034e540)
	k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000087f08, 0x4e94914f0000, 0x3fb999999999999a, 0x1, 0xc00034e540)
	k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext({0x26eb370, 0xc0004de410}, 0xc000087f98, 0x4e94914f0000, 0x3fb999999999999a, 0x1)
	k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:259 +0x93
github.com/nginxinc/kubernetes-ingress/internal/telemetry.(*Collector).Start(...)
	github.com/nginxinc/kubernetes-ingress/internal/telemetry/collector.go:103
github.com/nginxinc/kubernetes-ingress/internal/k8s.(*LoadBalancerController).Run.func3({0x26eb370?, 0xc0004de410?})
	github.com/nginxinc/kubernetes-ingress/internal/k8s/controller.go:745 +0xd7
created by github.com/nginxinc/kubernetes-ingress/internal/k8s.(*LoadBalancerController).Run in goroutine 1
	github.com/nginxinc/kubernetes-ingress/internal/k8s/controller.go:742 +0x61e
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
	panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x38 pc=0x1c4201e]

Running the same Helm build on this branch/PR works just fine, producing the expected output below:

I0614 09:28:56.826520       1 collector.go:112] Collecting telemetry data
I0614 09:28:57.329037       1 collector.go:163] Telemetry data collected: {Data:{ProjectName:NIC ProjectVersion:3.6.0-SNAPSHOT ProjectArchitecture:amd64 ClusterID:3d306033-5883-4901-9ffe-90db11840349 ClusterVersion:v1.30.0 ClusterPlatform:other InstallationID:79d96e10-d5ea-45c6-9e72-8adfe7e18bf5 ClusterNodeCount:1} NICResourceCounts:{VirtualServers:0 VirtualServerRoutes:0 TransportServers:0 Replicas:1 Secrets:0 ClusterIPServices:2 NodePortServices:0 LoadBalancerServices:1 ExternalNameServices:0 RegularIngressCount:0 MasterIngressCount:0 MinionIngressCount:0 IngressClasses:1 AccessControlPolicies:0 RateLimitPolicies:0 JWTAuthPolicies:0 BasicAuthPolicies:0 IngressMTLSPolicies:0 EgressMTLSPolicies:0 OIDCPolicies:0 WAFPolicies:0 GlobalConfiguration:false IngressAnnotations:[] AppProtectVersion: IsPlus:true InstallationFlags:[-nginx-plus=true -nginx-reload-timeout=60000 -enable-app-protect=false -enable-app-protect-dos=false -nginx-configmaps=default/my-release-nginx-ingress -ingress-class=nginx -health-status=false -health-status-uri=/nginx-health -nginx-debug=false -v=3 -nginx-status=true -nginx-status-port=8080 -nginx-status-allow-cidrs=127.0.0.1 -report-ingress-status -external-service=my-release-nginx-ingress-controller -enable-leader-election=true -leader-election-lock-name=nginx-ingress-leader -enable-prometheus-metrics=true -prometheus-metrics-listen-port=9113 -prometheus-tls-secret= -enable-service-insight=false -service-insight-listen-port=9114 -service-insight-tls-secret= -enable-custom-resources=false -enable-snippets=true -include-year=false -disable-ipv6=false -ready-status=true -ready-status-port=8081 -enable-latency-metrics=false -ssl-dynamic-reload=true -enable-telemetry-reporting=true -weight-changes-dynamic-reload=false]}}

@jjngx jjngx merged commit 46120dd into main Jun 14, 2024
85 checks passed
@jjngx jjngx deleted the fix/cr branch June 14, 2024 11:06
ssrahul96 pushed a commit to ssrahul96/kubernetes-ingress that referenced this pull request Jun 20, 2024
Successfully merging this pull request may close these issues.

Error collecting telemetry when custom-resources flag is turned off