Make interceptor Cluster bound instead of namespace bound #240
Comments
This will be done through #183 where it will become multi-tenant.
@tomkerkhove actually, the interceptor after #183 (associated PR is #206) only works in the same namespace it's running in. that's because it forwards requests to services in that same namespace.

@nilesh93 we can make fairly simple changes to the interceptor, scaler and HTTP Addon operator to respect a configurable target namespace. the result would be that the interceptor, scaler, addon operator and KEDA operator would still only work with one namespace at a time, but you could run many of them all in the same admin namespace and configure each with a different target namespace (the same would go for scaler, addon operator, and KEDA operator). would that solution be workable for you?
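For illustration only, here is a minimal sketch of what a per-deployment target namespace could look like in an interceptor's forwarding path. The `KEDA_HTTP_TARGET_NAMESPACE` environment variable and the service name are hypothetical, not actual configuration of the add-on; the point is only that forwarding to a different namespace reduces to building the fully qualified service DNS name.

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"os"
)

// targetNamespace is a hypothetical setting; today the interceptor only
// resolves services in the namespace it runs in.
func targetNamespace() string {
	if ns := os.Getenv("KEDA_HTTP_TARGET_NAMESPACE"); ns != "" {
		return ns
	}
	return "default"
}

// proxyTo builds a reverse proxy to a Service in the configured target
// namespace using the cluster-internal DNS name
// <service>.<namespace>.svc.cluster.local.
func proxyTo(service string, port int) *httputil.ReverseProxy {
	target := &url.URL{
		Scheme: "http",
		Host:   fmt.Sprintf("%s.%s.svc.cluster.local:%d", service, targetNamespace(), port),
	}
	return httputil.NewSingleHostReverseProxy(target)
}

func main() {
	// Forward all incoming traffic to a single backing service; a real
	// interceptor would pick the service per request from its routing table.
	log.Fatal(http.ListenAndServe(":8080", proxyTo("my-app", 8080)))
}
```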
note that this interacts with #241. if the operator were to create new interceptors and scalers on submission of a new …
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. Thank you for your contributions.
The interceptor should be able to run in an admin namespace (e.g. keda-http) and forward traffic to multiple services in multiple namespaces.
Use-Case
In a cluster with multiple services spread across multiple namespaces, it can be costly to add a KEDA HTTP interceptor per namespace. Instead, just like an ingress controller, it would be best if the interceptor could reside in an admin namespace and forward traffic to multiple services in multiple namespaces.
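As a rough illustration of the ingress-controller-style behaviour described above, the sketch below routes incoming requests to services in different namespaces based on the request Host header. The routing table, host names, and backends are hypothetical examples, not part of the add-on's actual configuration.

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

// backend identifies a Service that may live in any namespace.
type backend struct {
	Namespace string
	Service   string
	Port      int
}

// routes maps incoming Host headers to backends across namespaces
// (hypothetical examples, similar to ingress rules).
var routes = map[string]backend{
	"app1.example.com": {Namespace: "team-a", Service: "app1", Port: 8080},
	"app2.example.com": {Namespace: "team-b", Service: "app2", Port: 8080},
}

func handler(w http.ResponseWriter, r *http.Request) {
	be, ok := routes[r.Host]
	if !ok {
		http.Error(w, "no route for host", http.StatusNotFound)
		return
	}
	// Cross-namespace forwarding only needs the fully qualified service DNS
	// name; the interceptor itself can stay in a single admin namespace.
	target := &url.URL{
		Scheme: "http",
		Host:   fmt.Sprintf("%s.%s.svc.cluster.local:%d", be.Service, be.Namespace, be.Port),
	}
	httputil.NewSingleHostReverseProxy(target).ServeHTTP(w, r)
}

func main() {
	log.Fatal(http.ListenAndServe(":8080", http.HandlerFunc(handler)))
}
```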