
Reconcile resources #240

Closed · wants to merge 3 commits

Conversation

Boomatang
Contributor

closes #234

Reconcile changes made to the AuthConfig and Limitador CRs to reflect the configuration in the AuthPolicy and RateLimitPolicy. Multiple rate limit policies can be configured on the Limitador CR, but only one rate limit policy is allowed per targeted resource (HTTPRoute or Gateway).

Verification

Setup

  • Install the kuadrant operator using make local-setup
  • Set up the gateway, routes, and related resources:
echo '
---
apiVersion: kuadrant.io/v1beta1
kind: Kuadrant
metadata:
  name: kuadrant-sample
spec: {}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: toystore
  labels:
    app: toystore
spec:
  selector:
    matchLabels:
      app: toystore
  template:
    metadata:
      labels:
        app: toystore
    spec:
      containers:
        - name: toystore
          image: quay.io/3scale/authorino:echo-api
          env:
            - name: PORT
              value: "3000"
          ports:
            - containerPort: 3000
              name: http
  replicas: 1
---
apiVersion: v1
kind: Service
metadata:
  name: toystore
spec:
  selector:
    app: toystore
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 3000
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: toystore
  labels:
    app: toystore
spec:
  parentRefs:
    - name: istio-ingressgateway
      namespace: istio-system
  hostnames: ["*.toystore.com"]
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: "/toy"
          method: GET
      backendRefs:
        - name: toystore
          port: 80
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: dollstore
  labels:
    app: dollstore
spec:
  parentRefs:
    - name: istio-ingressgateway
      namespace: istio-system
  hostnames: ["*.toystore.com"]
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: "/doll"
          method: GET
      backendRefs:
        - name: toystore
          port: 80
---
apiVersion: v1
kind: Secret
metadata:
  annotations:
    secret.kuadrant.io/user-id: bob
  name: bob-key
  labels:
    authorino.kuadrant.io/managed-by: authorino
    app: toystore
stringData:
  api_key: IAMBOB
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
  annotations:
    secret.kuadrant.io/user-id: alice
  name: alice-key
  labels:
    authorino.kuadrant.io/managed-by: authorino
    app: toystore
stringData:
  api_key: IAMALICE
type: Opaque 
---
' | kubectl apply -f -

AuthPolicy Reconcile

  • create authPolicy
echo '
apiVersion: kuadrant.io/v1beta1
kind: AuthPolicy
metadata:
  name: toystore
spec:
  targetRef:
    group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: toystore
  rules:
  - paths: ["/toy*"]
  authScheme:
    identity:
    - name: friends
      apiKey:
        allNamespaces: true
        selector:
          matchLabels:
            app: toystore
      credentials:
        in: authorization_header
        keySelector: APIKEY
    response:
    - json:
        properties:
          - name: userID
            value: null
            valueFrom:
              authJSON: auth.identity.metadata.annotations.secret\.kuadrant\.io/user-id
      name: rate-limit-apikey
      wrapper: envoyDynamicMetadata
      wrapperKey: ext_auth_data
' | kubectl apply -f -
  • wait for reconcile to complete
watch kubectl get AuthPolicy toystore -o jsonpath='{.status.conditions[].status}'

Expect: True

  • change part of the AuthConfig spec that was defined as part of the AuthPolicy. A good example is spec.identity.apikey.name = friend
kubectl edit AuthConfig ap-default-toystore  
  • Check that the changes have been reverted.
kubectl get AuthConfig ap-default-toystore -o yaml
  • Expect: the changes should be reverted.
  • Also note that a back reference to the AuthPolicy has been set in the annotations.

RateLimitPolicy Reconcile

RateLimitPolicy at gateway level

  • Create a RateLimitPolicy on the gateway
echo "
apiVersion: kuadrant.io/v1beta2
kind: RateLimitPolicy
metadata:
  name: toystore
  namespace: istio-system
spec:
  targetRef:
    group: gateway.networking.k8s.io
    kind: Gateway
    name: istio-ingressgateway
  limits:
    toystore:
      rates:
        - limit: 5
          duration: 10
          unit: second
      counters:
        - context.request.http.path
" | kubectl apply -f -
  • wait for reconcile to complete
watch kubectl get RateLimitPolicy toystore -o jsonpath='{.status.conditions[].status}' -n istio-system

Expect: True

  • Modify part of the Limitador CR, e.g. change the max_value from 5 to 1.
kubectl edit limitador limitador -n default
  • Expect the value to be reverted back.
  • Note that a reference to the RateLimitPolicy has been added to the annotations.

RateLimitPolicy at http Route level

  • Create RateLimitPolicies at the route level
echo '
---
apiVersion: kuadrant.io/v1beta2
kind: RateLimitPolicy
metadata:
  name: toystore
spec:
  targetRef:
    group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: toystore
  limits:
    toystore:
      routeSelectors:
        - hostnames:
          - "*.toystore.com"
          matches:
            - path:
                type: PathPrefix
                value: "/toy"
              method: GET
      rates:
        - limit: 5
          duration: 10
          unit: second
      counters:
        - context.request.http.path
---
apiVersion: kuadrant.io/v1beta2
kind: RateLimitPolicy
metadata:
  name: dollstore
spec:
  targetRef:
    group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: dollstore
  limits:
    toystore:
      routeSelectors:
        - hostnames:
          - "*.toystore.com"
          matches:
            - path:
                type: PathPrefix
                value: "/dolls"
              method: GET
      rates:
        - limit: 1
          duration: 5
          unit: second
      counters:
        - context.request.http.path
' | kubectl apply -f - 
  • Once again view the Limitador CR.
  • Notice the different references in the annotations.
  • Edit some parts of the spec.
  • Expect the edits to be reverted.
  • Start removing the RateLimitPolicies.
  • Expect: the annotations should only list the current RateLimitPolicies.
  • Once all RateLimitPolicies have been removed from the cluster, the reference annotation should be fully removed.
    • NOTE: the Limitador CR will not be removed. This needs to be discussed and is out of scope for this work.

Adds a new watch to check for changes in the Limitador CR. If there are changes, we reconcile back to the expected configuration as stated in the rate limit policies (RLPs).
- Valid for RLPs attached to an HTTPRoute
- Valid for RLPs attached to a Gateway
Adds a new watch to check for changes in the AuthConfig. If there are changes, we reconcile back to the expected configuration as stated in the AuthPolicy.
@Boomatang Boomatang requested a review from a team as a code owner August 28, 2023 15:47
@codecov

codecov bot commented Aug 28, 2023

Codecov Report

Merging #240 (1553fac) into main (0adfe0c) will decrease coverage by 2.21%.
Report is 1 commit behind head on main.
The diff coverage is 55.29%.

@@            Coverage Diff             @@
##             main     #240      +/-   ##
==========================================
- Coverage   64.88%   62.68%   -2.21%     
==========================================
  Files          33       37       +4     
  Lines        3224     3393     +169     
==========================================
+ Hits         2092     2127      +35     
- Misses        970     1075     +105     
- Partials      162      191      +29     
Flag        | Coverage Δ
integration | 68.05% <65.68%> (-3.96%) ⬇️
unit        | 57.68% <39.70%> (-0.73%) ⬇️

Flags with carried forward coverage won't be shown.

Component           | Coverage Δ
api/v1beta1 (u)     | ∅ <ø> (∅)
pkg/common (u)      | 73.46% <0.00%> (-3.53%) ⬇️
pkg/istio (u)       | 29.69% <ø> (ø)
pkg/log (u)         | 31.81% <ø> (ø)
pkg/reconcilers (u) | 33.68% <ø> (ø)
pkg/rlptools (u)    | 60.39% <90.00%> (+2.76%) ⬆️
controllers (i)     | 68.05% <65.68%> (-3.96%) ⬇️

File Changed                              | Coverage Δ
pkg/common/annotations.go                 | 0.00% <0.00%> (ø)
pkg/common/back_reference.go              | 0.00% <0.00%> (ø)
controllers/ratelimitpolicy_limits.go     | 60.68% <48.88%> (-7.81%) ⬇️
controllers/authconfig_eventmapper.go     | 72.72% <72.72%> (ø)
controllers/limitador_eventmapper.go      | 73.91% <73.91%> (ø)
controllers/ratelimitpolicy_controller.go | 68.32% <75.00%> (-8.19%) ⬇️
pkg/rlptools/utils.go                     | 96.10% <90.00%> (-3.90%) ⬇️
controllers/authpolicy_auth_config.go     | 80.19% <100.00%> (+1.03%) ⬆️
controllers/authpolicy_controller.go      | 73.88% <100.00%> (-1.12%) ⬇️

... and 5 files with indirect coverage changes

Comment on lines 6 to 7
authorinoapi "github.com/kuadrant/authorino/api/v1beta1"

Contributor

Suggested change
authorinoapi "github.com/kuadrant/authorino/api/v1beta1"
authorinoapi "github.com/kuadrant/authorino/api/v1beta1"

@@ -216,7 +216,13 @@ func (r *RateLimitPolicyReconciler) deleteResources(ctx context.Context, rlp *ku
}
}

// update annotation of policies affecting the gateway
// remove direct back ref from limitador CR
err = r.deleteLimitadorBackReference(ctx, rlp)
Contributor

Hmm. So reconcileLimitadorBackRef adds the RLP ref to the back-ref annotation and performs an update on the Limitador resource, then we call deleteLimitadorBackReference, which removes the same RLP ref from the annotation and updates the Limitador resource again?

Contributor Author

Yes, that is correct and I should explain my reasoning. The reconcileLimitador function runs no matter what action is being carried out, be it create, update, or delete. The label removal logic could live in reconcileLimitadorBackRef, but doing that would mean having to recreate the logic for checking whether the resource has been deleted. As deleteResources --> deleteLimitadorBackReference only gets called when an RLP is deleted, there was no need for any more checks in deleteLimitadorBackReference. I am aware there are more API calls with what I have done, but as it's only on deletion and not on every reconcile, I felt the trade-off was worth it.

Do you want me to move the label deletion logic to the reconcileLimitadorBackRef?

Contributor

Do you want me to move the label deletion logic to the reconcileLimitadorBackRef?

I would wait. As I said in my other comment, if we're going to watch and reconcile changes on all derived resources – the Limitador CR is just one of them! – I see more of this pattern of annotating/de-annotating ahead. I think we can aim for more generic functions and types that we can reuse in more places.

Contributor Author

From the community call, my understanding is we won't be using https://github.com/Kuadrant/gateway-api-machinery till after MVP. So I'm thinking I'll address the other issue raised now and create an issue to follow up with a refactor of the labelling once we can use gateway-api-machinery. Would you be happy with this approach?

Contributor

This won't be complete anyway until we cover all kinds of resources that, once modified directly, can drift from the source of truth. I understand that one is a refactoring while the other prevents what could be perceived as a bug, but I still think those two things are intimately related and better tackled together. I'm also afraid of the explosion of annotations while others seem to be addressing the issue differently. IOW, let's not amend it quickly due to release pressure; let's do this the right way. The fix can be part of a post-release patch if we want.

func (r *RateLimitPolicyReconciler) deleteLimitadorBackReference(ctx context.Context, policy client.Object) error {
policyKey := client.ObjectKeyFromObject(policy)

limitadorList, err := r.listLimitadorByNamespace(ctx, policyKey.Namespace)
Contributor

The Limitador CR is not necessarily in the same namespace as the RLP:

kuadrantNamespace, isSet := common.GetKuadrantNamespaceFromPolicy(rlp)

Contributor

@guicassolato left a comment

Adding and removing a policy ref from the back-ref annotation of a Kubernetes resource is a pattern we (will) repeat multiple times if we want to map all events related to these resources back to triggering the reconciliation of possibly affected policies.

In https://github.com/Kuadrant/gateway-api-machinery, we tried to define types that are a bit more generic, so they can be reused to manage back-ref annotations for different kinds of policies (e.g. Gateway, HTTPRoute). Perhaps we could try going one step further there and also generalise for the cases where any/multiple arbitrary kinds of resources are annotated with back refs of the same (generic) kind of policy (e.g. Limitador, WasmPlugin, and EnvoyFilter for the RateLimitPolicy).

Either way, I see an opportunity here to apply patterns from https://github.com/Kuadrant/gateway-api-machinery, to coordinate with changes to that other repo if needed, while addressing #203.

cc @KevFan
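For illustration, a minimal, self-contained sketch of what such a generic back-ref helper could look like. The annotation key and function names here are hypothetical, not the operator's actual API; a real implementation would operate on client.Object metadata rather than a bare map:

```go
package main

import (
	"encoding/json"
	"fmt"
	"slices"
)

// backRefKey is a hypothetical annotation key holding a JSON-encoded
// list of policy keys ("namespace/name").
const backRefKey = "kuadrant.io/ratelimitpolicies"

// AddBackRef inserts policyKey into the annotation list if absent.
// It returns true when the annotations map was modified.
func AddBackRef(annotations map[string]string, policyKey string) bool {
	refs := decodeRefs(annotations[backRefKey])
	if slices.Contains(refs, policyKey) {
		return false
	}
	refs = append(refs, policyKey)
	annotations[backRefKey] = encodeRefs(refs)
	return true
}

// RemoveBackRef deletes policyKey from the list; when the list becomes
// empty the annotation itself is removed, matching the verification
// steps above ("the reference annotation should be fully removed").
func RemoveBackRef(annotations map[string]string, policyKey string) bool {
	refs := decodeRefs(annotations[backRefKey])
	i := slices.Index(refs, policyKey)
	if i < 0 {
		return false
	}
	refs = slices.Delete(refs, i, i+1)
	if len(refs) == 0 {
		delete(annotations, backRefKey)
	} else {
		annotations[backRefKey] = encodeRefs(refs)
	}
	return true
}

func decodeRefs(raw string) []string {
	var refs []string
	if raw != "" {
		_ = json.Unmarshal([]byte(raw), &refs)
	}
	return refs
}

func encodeRefs(refs []string) string {
	b, _ := json.Marshal(refs)
	return string(b)
}

func main() {
	ann := map[string]string{}
	fmt.Println(AddBackRef(ann, "default/toystore")) // true: added
	fmt.Println(AddBackRef(ann, "default/toystore")) // false: idempotent
	fmt.Println(ann[backRefKey])
	fmt.Println(RemoveBackRef(ann, "default/toystore")) // true: removed
	_, exists := ann[backRefKey]
	fmt.Println(exists) // false: annotation fully removed
}
```

Because the helpers only see an annotations map and a policy key, the same code would work for any annotated resource kind (Limitador, WasmPlugin, EnvoyFilter) and any policy kind.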

Contributor

@eguzki left a comment

I think that adding annotations with all the RLPs to the Limitador CR is something we will be doing sooner or later. Nothing against that. Totally in favor of it.

However, monitoring the Limitador CR and generating a request event for each RLP instance is not necessary, and that rings some bells. I think the source issue is not the implementation of this PR but the implementation of the reconciliation of the Limitador CR's limits. That raises an out-of-scope discussion.

I think that a bigger refactor is needed. But meanwhile, even if this is sub-optimal, I cannot come up with a better approach.

Last, but not least, what about the other managed resources like EnvoyFilter, WasmPlugin, and AuthorizationPolicy from Istio?


requests := make([]reconcile.Request, 0)
for _, ref := range refs {
m.Logger.V(1).Info("MapRateLimitPolicy", "ratelimitpolicy", ref)
Contributor

With the current implementation, you do not need to trigger one event per RLP; only one of them is enough to reconcile the entire Limitador CR. Each rate limit policy event reads all gateways from the cluster and iterates over all the RLP references in the annotations of the gateways.

Contributor Author

I may explore this. What you're saying makes sense, but my mind is not allowing me to accept it as true. 😃 I will add any changes as needed.
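The reviewer's point can be sketched in a few lines. This is a self-contained illustration, not the operator's code: the Request type stands in for controller-runtime's reconcile.Request, and the mapper enqueues a single request because each RLP reconcile rebuilds the whole rate-limit index from every gateway's annotations anyway:

```go
package main

import (
	"fmt"
	"strings"
)

// Request mirrors the shape of controller-runtime's reconcile.Request
// (a namespaced name); defined locally so this sketch compiles alone.
type Request struct{ Namespace, Name string }

// mapLimitadorEvent maps a Limitador change event to reconcile
// requests. It returns at most one request: reconciling any single RLP
// recomputes the entire Limitador CR, so fanning out one request per
// back-ref would only trigger redundant reconciles.
func mapLimitadorEvent(refs []string) []Request {
	for _, ref := range refs {
		ns, name, ok := strings.Cut(ref, "/")
		if !ok {
			continue // skip malformed refs
		}
		return []Request{{Namespace: ns, Name: name}}
	}
	return nil
}

func main() {
	refs := []string{"default/toystore", "default/dollstore"}
	fmt.Println(mapLimitadorEvent(refs)) // one request, not two
}
```

The real mapper would instead return one request per RLP ref; the change above is the behavior the review suggests.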

@@ -68,8 +69,12 @@ func (r *RateLimitPolicyReconciler) reconcileLimitador(ctx context.Context, rlp
return err
}

// return if limitador is up to date
if rlptools.Equal(rateLimitIndex.ToRateLimits(), limitador.Spec.Limits) {
updated, err := r.reconcileLimitadorBackRef(limitador, rlp)
Contributor

IMO this is wrong. Reconciliation is not about adding; it is about comparing existing and desired, and if they differ, setting existing as specified in desired. Thus, this method should get the list of RLPs that are wanted in the annotation item, then compare with the existing and update if not equal.

Contributor Author

Ok, I see how this would be wrong from the perspective of comparing existing and desired. In my mind the reconcile loops were about eventual consistency, which requires adding and rescheduling. This comes from how the RHOAM operator was working. I am happy to refactor this to match the comparing-existing-and-desired approach.
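A minimal sketch of the compare-existing-with-desired style asked for here, on a bare annotations map (the annotation key and function name are hypothetical, not the operator's API): compute the full desired annotation value, overwrite only when it differs, and report whether a change was made so the caller knows whether to update the Limitador resource.

```go
package main

import (
	"encoding/json"
	"fmt"
	"slices"
)

// rlpBackRefKey is a hypothetical annotation key.
const rlpBackRefKey = "kuadrant.io/ratelimitpolicies"

// reconcileBackRefs sets the back-ref annotation to exactly the desired
// set of policy keys, rather than appending refs one at a time.
// Returns true when the annotations map was modified.
func reconcileBackRefs(annotations map[string]string, desired []string) bool {
	if len(desired) == 0 {
		if _, ok := annotations[rlpBackRefKey]; !ok {
			return false
		}
		delete(annotations, rlpBackRefKey)
		return true
	}
	refs := slices.Clone(desired)
	slices.Sort(refs) // deterministic value, order-insensitive compare
	b, _ := json.Marshal(refs)
	want := string(b)
	if annotations[rlpBackRefKey] == want {
		return false
	}
	annotations[rlpBackRefKey] = want
	return true
}

func main() {
	ann := map[string]string{}
	fmt.Println(reconcileBackRefs(ann, []string{"default/toystore"})) // true: set
	fmt.Println(reconcileBackRefs(ann, []string{"default/toystore"})) // false: already desired
	fmt.Println(reconcileBackRefs(ann, nil))                          // true: removed
}
```

With this shape, delete is just "reconcile with an empty desired set", which would also remove the duplicate update discussed in the earlier thread.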

return err
}
// return if limitador is up-to-date
if rlptools.Equal(rateLimitIndex.ToRateLimits(), limitador.Spec.Limits) && !updated {
Contributor

The usual pattern would be:

updated := false

backRefUpdated, err := r.reconcileLimitadorBackRef(limitador, desired_rlps)
if err != nil {
    return err
}
updated = updated || backRefUpdated

limitsUpdated, err := r.reconcileLimits(ctx, limitador, rlpRefs)
if err != nil {
    return err
}
updated = updated || limitsUpdated

if updated {
    err = r.UpdateResource(ctx, limitador)
    logger.V(1).Info("update limitador", "limitador", limitadorKey, "err", err)
    if err != nil {
        return err
    }
}

Contributor Author

Happy to refactor

@@ -108,3 +113,72 @@ func (r *RateLimitPolicyReconciler) buildRateLimitIndex(ctx context.Context, rlp

return rateLimitIndex, nil
}

func (r *RateLimitPolicyReconciler) reconcileLimitadorBackRef(limitador *limitadorv1alpha1.Limitador, policyKey client.Object) (updated bool, err error) {
policy := client.ObjectKeyFromObject(policyKey)
Contributor

This has already been implemented in https://github.com/Kuadrant/kuadrant-operator/blob/main/pkg/common/gatewayapi_utils.go#L411-L449.
We should look for a way to reuse it. Ultimately it will be implemented in https://github.com/Kuadrant/gateway-api-machinery.

Again, if this is about adding a back ref, call the method AddBackrefToLimitador. Reconciliation is something else.

Contributor Author

I did see the function in gatewayapi_utils. At the time I was in two minds about doing the refactor. The original method is tied to a struct and not reusable as is, and the method logic is straightforward. Personally, I have no problem with repeating code twice; more than that and I do think it should be generalized.

I do think this labelling procedure will be moved to gateway-api-machinery, but that most likely will not happen before MVP at this stage.

Yes, naming is hard. AddBackrefToLimitador is a better name. I will look to rename this while doing the refactor to the existing-and-desired design mentioned earlier.

Contributor

We can fix all DRY issues when depending on the gateway-api-machinery lib.

func (r *RateLimitPolicyReconciler) deleteLimitadorBackReference(ctx context.Context, policy client.Object) error {
policyKey := client.ObjectKeyFromObject(policy)

limitadorList, err := r.listLimitadorByNamespace(ctx, policyKey.Namespace)
Contributor

Why list Limitador instances by namespace? The Limitador instance name/namespace is well known by reading the RLP or gateway annotations. The name is hard-coded (for now, but this will change in the future) as limitador here and the namespace is computed here.

Contributor Author

On the namespace first: Gui mentioned this and I have the change locally. As to why list the instances: I have been burned by hard-coded values changing to dynamic values a few times, so now I believe it to be more robust to use list over get. I am happy to refactor if you think it should be changed, but that was my reasoning.

Contributor

By design, one RLP will be adding configuration to only one Limitador CR. At least, for now.

@Boomatang (Contributor Author)

I think that a bigger refactor is needed. But meanwhile, even if this is sub-optimal, I cannot come up with a better approach.

I tend to agree with this. The use of watches and owns can spiral out of control. I am not a fan of controllers, and I think the use of informers is the better approach. As you said, this is a large discussion which is out of scope here.

@Boomatang Boomatang closed this Nov 21, 2023
Successfully merging this pull request may close these issues.

Configurations not reconciled by Kuadrant operator
3 participants