
Add publishRate config for RabbitMQ Scaler #1648

Closed · wants to merge 5 commits

Conversation

@rwkarg (Contributor) commented on Mar 2, 2021

This adds a publishRate config for the RabbitMQ scaler. It is mutually exclusive with the existing queueLength config.
The use case is scaling higher-throughput scenarios where there is little queue backlog, or where the queue length doesn't adequately correlate with the number of instances required to match the workload.
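To make the mutual exclusivity concrete, here is a minimal, hypothetical sketch of how the trigger metadata parsing could enforce "exactly one of queueLength or publishRate". The names and structure are illustrative only and are not taken from this PR's diff.

```go
// Hypothetical sketch only; names and structure are illustrative, not this PR's diff.
package main

import (
	"fmt"
	"log"
	"strconv"
)

// rabbitMQMetadata stands in for the scaler's parsed trigger metadata.
type rabbitMQMetadata struct {
	queueLength int64
	publishRate float64
	mode        string // "QueueLength" or "PublishRate"
}

// parseRabbitMQMetadata enforces that exactly one of queueLength or publishRate is set.
func parseRabbitMQMetadata(metadata map[string]string) (*rabbitMQMetadata, error) {
	meta := rabbitMQMetadata{}
	qlRaw, hasQueueLength := metadata["queueLength"]
	prRaw, hasPublishRate := metadata["publishRate"]

	switch {
	case hasQueueLength && hasPublishRate:
		return nil, fmt.Errorf("queueLength and publishRate are mutually exclusive; specify only one")
	case hasQueueLength:
		v, err := strconv.ParseInt(qlRaw, 10, 64)
		if err != nil {
			return nil, fmt.Errorf("invalid queueLength: %w", err)
		}
		meta.queueLength = v
		meta.mode = "QueueLength"
	case hasPublishRate:
		v, err := strconv.ParseFloat(prRaw, 64)
		if err != nil {
			return nil, fmt.Errorf("invalid publishRate: %w", err)
		}
		meta.publishRate = v
		meta.mode = "PublishRate"
	default:
		return nil, fmt.Errorf("either queueLength or publishRate must be specified")
	}
	return &meta, nil
}

func main() {
	// Hypothetical trigger metadata as it might appear on a ScaledObject.
	meta, err := parseRabbitMQMetadata(map[string]string{"publishRate": "100"})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("mode=%s publishRate=%v\n", meta.mode, meta.publishRate)
}
```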

Checklist

  • Commits are signed with Developer Certificate of Origin (DCO - learn more)
  • Tests have been added
  • [-] A PR is opened to update our Helm chart (repo) (if applicable, i.e. when deployment manifests are modified)
  • A PR is opened to update the documentation on (repo) (if applicable)
  • Changelog has been updated

Fixes #

Review threads on pkg/scalers/rabbitmq_scaler.go (resolved)
@rwkarg (Contributor, Author) commented on Mar 3, 2021

This is working in our clusters, both with a publishRate-only trigger and with publishRate and queueLength triggers together (and the existing queueLength-only ScaledObjects continue to work as before).

I'll let this bake overnight and then submit another PR with the correct DCO comment in the commit.
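As background for the mixed-trigger setup mentioned above: when a ScaledObject carries both triggers, each one surfaces as a separate external metric, and the HPA scales to the largest per-metric recommendation. The sketch below works through that arithmetic with made-up numbers, assuming the usual AverageValue targets; none of it is taken from this PR.

```go
// Rough sketch of how the HPA would combine a publishRate metric and a
// queueLength metric: each metric yields ceil(metricValue / targetPerReplica),
// and the HPA scales to the largest recommendation. Numbers are made up.
package main

import (
	"fmt"
	"math"
)

// replicasFor computes the per-metric recommendation for an AverageValue target.
func replicasFor(metricValue, targetPerReplica float64) float64 {
	return math.Ceil(metricValue / targetPerReplica)
}

func main() {
	// Hypothetical readings: 900 msgs/s published against a publishRate target
	// of 100 per replica, and 50 queued messages against a queueLength target
	// of 20 per replica.
	fromPublishRate := replicasFor(900, 100) // 9
	fromQueueLength := replicasFor(50, 20)   // 3

	// With little backlog, the publish rate drives the scale decision,
	// which is the scenario this PR targets.
	fmt.Println(math.Max(fromPublishRate, fromQueueLength)) // 9
}
```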

@rwkarg (Contributor, Author) commented on Mar 3, 2021

Closing, as this is replaced by #1653.

@rwkarg closed this on Mar 3, 2021