
panic in redisScaler #4197

Closed
veep opened this issue Feb 4, 2023 · 3 comments · Fixed by #4199
Labels
bug (Something isn't working)

Comments

veep commented Feb 4, 2023

Report

Running a brand new KEDA 2.9.3 install (my first KEDA install) on EKS, monitoring many Redis Celery queues, and getting occasional panics in redisScaler.

I believe it is happening only when I apply changes to the scaler configuration (though I was making no changes to the redis trigger itself).

Expected Behavior

No panics

Actual Behavior

Rarely, while applying updates to a ScaledObject with a redis trigger, I would get the panic. The operator would restart fine after a few seconds.

Steps to Reproduce the Problem

  1. Run a lot of simple redis ScaledObjects
  2. Apply minor changes to a lot of them

Logs from KEDA operator

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x20 pc=0x5eeea0]

goroutine 20089 [running]:
github.com/go-logr/logr.Logger.Error({{0x0?, 0x0?}, 0x2d6241e?}, {0x40930a0, 0xc000029c80}, {0x396abad, 0x19}, {0x0, 0x0, 0x0})
	/workspace/vendor/github.com/go-logr/logr/logr.go:279 +0x80
github.com/kedacore/keda/v2/pkg/scalers.(*redisScaler).GetMetricsAndActivity(0xc01a6b8540, {0x40bf988?, 0xc01abfcd00?}, {0xc019c75b40, 0xc})
	/workspace/pkg/scalers/redis_scaler.go:266 +0x1a5
github.com/kedacore/keda/v2/pkg/scaling/cache.(*ScalersCache).GetScaledObjectState(0xc01a6b85c0, {0x40bf988, 0xc01abfcd00}, 0xc01bdd2000)
	/workspace/pkg/scaling/cache/scalers_cache.go:136 +0x6fa
github.com/kedacore/keda/v2/pkg/scaling.(*scaleHandler).checkScalers(0xc000e97ef0, {0x40bf988, 0xc01abfcd00}, {0x3873b80?, 0xc01bdd2000?}, {0x40acd18, 0xc019c74928})
	/workspace/pkg/scaling/scale_handler.go:360 +0x510
github.com/kedacore/keda/v2/pkg/scaling.(*scaleHandler).startScaleLoop(0xc000e97ef0, {0x40bf988, 0xc01abfcd00}, 0xc019513680, {0x3873b80, 0xc01bdd2000}, {0x40acd18, 0xc019c74928})
	/workspace/pkg/scaling/scale_handler.go:162 +0x32a
created by github.com/kedacore/keda/v2/pkg/scaling.(*scaleHandler).HandleScalableObject
	/workspace/pkg/scaling/scale_handler.go:118 +0x71d
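
For context on the trace: the panic is raised inside logr.Logger.Error, which is consistent with calling Error on a Logger whose sink was never set. A minimal, self-contained sketch (assumed names, not the KEDA code) that reproduces the same class of crash with the logr version vendored in the trace above:

package main

import (
	"errors"

	"github.com/go-logr/logr"
)

// fakeScaler stands in for the real scaler struct: its logger field is
// declared but never assigned, so it stays the zero value of logr.Logger
// (no LogSink behind it).
type fakeScaler struct {
	logger logr.Logger
}

func main() {
	s := &fakeScaler{}
	// Calling Error on the zero-value Logger dereferences the nil sink,
	// producing "invalid memory address or nil pointer dereference".
	s.logger.Error(errors.New("redis unreachable"), "error getting list length")
}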

KEDA Version

2.9.3

Kubernetes Version

1.24

Platform

Amazon Web Services

Scaler Details

redis

Anything else?

No response

veep added the bug (Something isn't working) label on Feb 4, 2023
JorTurFer (Member) commented

Hi,
Thanks for reporting. It seems that the logger isn't properly assigned; I don't know why (yet).

JorTurFer (Member) commented

Okay, I found the issue: the logger isn't properly assigned during scaler generation, so on any error (which should just be logged) the operator panics.
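
For illustration, a minimal sketch of the kind of assignment that was missing. The struct and constructor names here are hypothetical, and this is not the actual change from #4199:

package scalers

import (
	"github.com/go-logr/logr"
	logf "sigs.k8s.io/controller-runtime/pkg/log"
)

// hypotheticalRedisScaler mirrors the shape of the problem: a struct whose
// logger must be populated before any error path can call Error through it.
type hypotheticalRedisScaler struct {
	logger logr.Logger
}

// newHypotheticalRedisScaler assigns the logger at construction time so that
// later calls such as GetMetricsAndActivity never hit a nil sink when they log.
func newHypotheticalRedisScaler() *hypotheticalRedisScaler {
	return &hypotheticalRedisScaler{
		logger: logf.Log.WithName("redis_scaler"), // never left as the zero value
	}
}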

JorTurFer (Member) commented

Hey @veep,
The issue has been fixed; it'll be included in the next release.
