
Flagger omits TrafficSplit backend service weight if weight is 0 due to omitempty option #930

Closed
johnsonshi opened this issue Jun 8, 2021 · 13 comments · Fixed by #934

@johnsonshi
Contributor

Describe the bug

Since OSM is now supported (SMI support added in #896), I did the following to create a canary deploy using OSM and Flagger.
As recommended in #896, I used MetricTemplate CRDs to create the required Prometheus custom metrics (request-success-rate and request-duration).

I then created a Canary custom resource for the podinfo deployment; however, it does not succeed. Flagger reports that it cannot create a TrafficSplit resource for the canary deployment.

Output excerpt of kubectl describe -f ./podinfo-canary.yaml:

Status:
  Canary Weight:  0
  Conditions:
    Last Transition Time:  2021-06-07T22:28:21Z
    Last Update Time:      2021-06-07T22:28:21Z
    Message:               New Deployment detected, starting initialization.
    Reason:                Initializing
    Status:                Unknown
    Type:                  Promoted
  Failed Checks:           0
  Iterations:              0
  Last Transition Time:    2021-06-07T22:28:21Z
  Phase:                   Initializing
Events:
  Type     Reason  Age                  From     Message
  ----     ------  ----                 ----     -------
  Warning  Synced  5m38s                flagger  podinfo-primary.test not ready: waiting for rollout to finish: observed deployment generation less then desired generation
  Normal   Synced  8s (x12 over 5m38s)  flagger  all the metrics providers are available!
  Warning  Synced  8s (x11 over 5m8s)   flagger  TrafficSplit podinfo.test create error: the server could not find the requested resource (post trafficsplits.split.smi-spec.io)


To Reproduce

./kustomize/osm/kustomization.yaml:

namespace: osm-system
bases:
  - ../base/flagger/
patchesStrategicMerge:
  - patch.yaml

./kustomize/osm/patch.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: flagger
spec:
  template:
    spec:
      containers:
        - name: flagger
          args:
            - -log-level=info
            - -include-label-prefix=app.kubernetes.io
            - -mesh-provider=smi:v1alpha3
            - -metrics-server=http://osm-prometheus.osm-system.svc:7070

---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: flagger
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flagger
subjects:
  - kind: ServiceAccount
    name: flagger
    namespace: osm-system

Used a MetricTemplate CRD to implement the required custom metric (recommended in #896) - request-success-rate.yaml:

apiVersion: flagger.app/v1beta1
kind: MetricTemplate
metadata:
  name: request-success-rate
  namespace: osm-system
spec:
  provider:
    type: prometheus
    address: http://osm-prometheus.osm-system.svc:7070
  query: |
    sum(
        rate(
            osm_request_total{
              destination_namespace="{{ namespace }}",
              destination_name="{{ target }}",
              response_code!="404"
            }[{{ interval }}]
        )
    )
    /
    sum(
        rate(
            osm_request_total{
              destination_namespace="{{ namespace }}",
              destination_name="{{ target }}"
            }[{{ interval }}]
        )
    ) * 100

Used a MetricTemplate CRD to implement the required custom metric (recommended in #896) - request-duration.yaml:

apiVersion: flagger.app/v1beta1
kind: MetricTemplate
metadata:
  name: request-duration
  namespace: osm-system
spec:
  provider:
    type: prometheus
    address: http://osm-prometheus.osm-system.svc:7070
  query: |
    histogram_quantile(
      0.99,
      sum(
        rate(
          osm_request_duration_ms{
            destination_namespace="{{ namespace }}",
            destination_name=~"{{ target }}"
          }[{{ interval }}]
        )
      ) by (le)
    )

podinfo-canary.yaml:

apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: podinfo
  namespace: test
spec:
  provider: "smi:v1alpha3"
  # deployment reference
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  # HPA reference (optional)
  autoscalerRef:
    apiVersion: autoscaling/v2beta2
    kind: HorizontalPodAutoscaler
    name: podinfo
  # the maximum time in seconds for the canary deployment
  # to make progress before it is rolled back (default 600s)
  progressDeadlineSeconds: 60
  service:
    # ClusterIP port number
    port: 9898
    # container port number or name (optional)
    targetPort: 9898
  analysis:
    # schedule interval (default 60s)
    interval: 30s
    # max number of failed metric checks before rollback
    threshold: 5
    # max traffic percentage routed to canary
    # percentage (0-100)
    maxWeight: 50
    # canary increment step
    # percentage (0-100)
    stepWeight: 5
    # Prometheus checks
    metrics:
    - name: request-success-rate
      # minimum req success rate (non 5xx responses)
      # percentage (0-100)
      thresholdRange:
        min: 99
      interval: 1m
    - name: request-duration
      # maximum req duration P99
      # milliseconds
      thresholdRange:
        max: 500
      interval: 30s
    # testing (optional)
    webhooks:
      - name: acceptance-test
        type: pre-rollout
        url: http://flagger-loadtester.test/
        timeout: 30s
        metadata:
          type: bash
          cmd: "curl -sd 'test' http://podinfo-canary.test:9898/token | grep token"
      - name: load-test
        type: rollout
        url: http://flagger-loadtester.test/
        metadata:
          cmd: "hey -z 2m -q 10 -c 2 http://podinfo-canary.test:9898/"

Output excerpt of kubectl describe -f ./podinfo-canary.yaml:

Status:
  Canary Weight:  0
  Conditions:
    Last Transition Time:  2021-06-07T22:28:21Z
    Last Update Time:      2021-06-07T22:28:21Z
    Message:               New Deployment detected, starting initialization.
    Reason:                Initializing
    Status:                Unknown
    Type:                  Promoted
  Failed Checks:           0
  Iterations:              0
  Last Transition Time:    2021-06-07T22:28:21Z
  Phase:                   Initializing
Events:
  Type     Reason  Age                  From     Message
  ----     ------  ----                 ----     -------
  Warning  Synced  5m38s                flagger  podinfo-primary.test not ready: waiting for rollout to finish: observed deployment generation less then desired generation
  Normal   Synced  8s (x12 over 5m38s)  flagger  all the metrics providers are available!
  Warning  Synced  8s (x11 over 5m8s)   flagger  TrafficSplit podinfo.test create error: the server could not find the requested resource (post trafficsplits.split.smi-spec.io)

Full output of kubectl describe -f ./podinfo-canary.yaml: https://pastebin.ubuntu.com/p/kB9qtPxZvr/



Expected behavior

Flagger should successfully create the TrafficSplit resource for the podinfo canary and complete the canary initialization.

Additional context

  • Flagger version: 1.11.0
  • Kubernetes version: 1.19.11
  • Service Mesh provider: smi (through osm)
  • Ingress provider: N/A.
@stefanprodan
Member

Can you please post the output of kubectl get crd trafficsplit -oyaml here?

@johnsonshi
Contributor Author

$ kubectl get crds
NAME                                    CREATED AT
alertproviders.flagger.app              2021-06-07T22:17:45Z
canaries.flagger.app                    2021-06-07T22:17:45Z
healthstates.azmon.container.insights   2021-06-07T21:42:27Z
httproutegroups.specs.smi-spec.io       2021-06-07T22:06:05Z
metrictemplates.flagger.app             2021-06-07T22:17:45Z
tcproutes.specs.smi-spec.io             2021-06-07T22:06:05Z
trafficsplits.split.smi-spec.io         2021-06-07T22:06:06Z
traffictargets.access.smi-spec.io       2021-06-07T22:06:05Z
udproutes.specs.smi-spec.io             2021-06-07T22:06:05Z

$ kubectl get crd trafficsplits -oyaml
Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "trafficsplits" not found

$ kubectl get crd trafficsplits.split.smi-spec.io -o yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  creationTimestamp: "2021-06-07T22:06:06Z"
  generation: 1
  name: trafficsplits.split.smi-spec.io
  resourceVersion: "4187"
  selfLink: /apis/apiextensions.k8s.io/v1/customresourcedefinitions/trafficsplits.split.smi-spec.io
  uid: 7ecc51f3-dd8e-4e53-b3c8-9b2d9491fa73
spec:
  conversion:
    strategy: None
  group: split.smi-spec.io
  names:
    kind: TrafficSplit
    listKind: TrafficSplitList
    plural: trafficsplits
    shortNames:
    - ts
    singular: trafficsplit
  preserveUnknownFields: true
  scope: Namespaced
  versions:
  - additionalPrinterColumns:
    - description: The apex service of this split.
      jsonPath: .spec.service
      name: Service
      type: string
    name: v1alpha2
    schema:
      openAPIV3Schema:
        properties:
          spec:
            properties:
              backends:
                description: The backend services of this split.
                items:
                  properties:
                    service:
                      description: Name of the Kubernetes service.
                      type: string
                    weight:
                      description: Traffic weight value of this backend.
                      type: number
                  required:
                  - service
                  - weight
                  type: object
                type: array
              service:
                description: The apex service of this split.
                type: string
            required:
            - service
            - backends
            type: object
    served: true
    storage: true
status:
  acceptedNames:
    kind: TrafficSplit
    listKind: TrafficSplitList
    plural: trafficsplits
    shortNames:
    - ts
    singular: trafficsplit
  conditions:
  - lastTransitionTime: "2021-06-07T22:06:06Z"
    message: 'spec.versions[0].schema.openAPIV3Schema.type: Required value: must not
      be empty at the root'
    reason: Violations
    status: "True"
    type: NonStructuralSchema
  - lastTransitionTime: "2021-06-07T22:06:06Z"
    message: no conflicts found
    reason: NoConflicts
    status: "True"
    type: NamesAccepted
  - lastTransitionTime: "2021-06-07T22:06:06Z"
    message: the initial names have been accepted
    reason: InitialNamesAccepted
    status: "True"
    type: Established
  storedVersions:
  - v1alpha2

@johnsonshi
Contributor Author

Thanks @stefanprodan for working with me on this. We (the OSM team) are trying to run a successful end-to-end demo of integrating OSM with Flagger. After the demo, we plan to contribute back to the Flagger repo (the OSM custom metrics, the required kustomization files, and any other changes needed in the Flagger codebase) to add OSM support to Flagger.

I think this is the only thing blocking our E2E demo (TrafficSplits not being created successfully). Appreciate the help @stefanprodan.

ref: openservicemesh/osm#1700
cc: @draychev + @michelleN + @phillipgibson

@stefanprodan
Member

Looks like you have v1alpha2 installed, but in the Flagger Canary you've set the provider to v1alpha3...
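
For reference, a minimal sketch of aligning the versions (assuming the cluster's TrafficSplit CRD serves only v1alpha2, as the CRD output above shows); both the Flagger mesh-provider flag and the Canary's provider field should name the same version:

# Flagger deployment args (./kustomize/osm/patch.yaml):
args:
  - -log-level=info
  - -include-label-prefix=app.kubernetes.io
  - -mesh-provider=smi:v1alpha2    # match the version served by the TrafficSplit CRD
  - -metrics-server=http://osm-prometheus.osm-system.svc:7070

# Canary (podinfo-canary.yaml):
spec:
  provider: "smi:v1alpha2"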

@stefanprodan
Member

Also, please rename the metric templates and add an osm- prefix; those names are reserved for the built-in metrics.
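
A minimal sketch of that rename (the osm-request-success-rate name below is illustrative); the template keeps the same query, and the Canary analysis then references it explicitly via templateRef:

apiVersion: flagger.app/v1beta1
kind: MetricTemplate
metadata:
  name: osm-request-success-rate   # prefixed so it no longer shadows the builtin metric
  namespace: osm-system
spec:
  provider:
    type: prometheus
    address: http://osm-prometheus.osm-system.svc:7070
  query: |
    # same PromQL as the request-success-rate template above

In the Canary analysis:

  metrics:
  - name: osm-request-success-rate
    templateRef:
      name: osm-request-success-rate
      namespace: osm-system
    thresholdRange:
      min: 99
    interval: 1m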

@johnsonshi
Contributor Author

OK, thanks @stefanprodan for pointing out that the SMI versions were mismatched. I'll try it again and let you know.

@johnsonshi
Contributor Author

Tried it with the correct version, and yet it still isn't working. I'll paste the diagnostics below.

@johnsonshi
Contributor Author

Status:
  Canary Weight:  0
  Conditions:
    Last Transition Time:  2021-06-13T23:14:43Z
    Last Update Time:      2021-06-13T23:14:43Z
    Message:               New Deployment detected, starting initialization.
    Reason:                Initializing
    Status:                Unknown
    Type:                  Promoted
  Failed Checks:           0
  Iterations:              0
  Last Transition Time:    2021-06-13T23:14:43Z
  Phase:                   Initializing
Events:
  Type     Reason  Age                From     Message
  ----     ------  ----               ----     -------
  Warning  Synced  52s                flagger  podinfo-primary.test not ready: waiting for rollout to finish: observed deployment generation less then desired generation
  Normal   Synced  23s (x2 over 53s)  flagger  all the metrics providers are available!
  Warning  Synced  22s                flagger  TrafficSplit podinfo.test create error: TrafficSplit.split.smi-spec.io "podinfo" is invalid: spec.backends.weight: Required value

The TrafficSplit versions now match (both are v1alpha2).

$ kubectl describe canary/podinfo -n test | grep v1alpha

  Provider:                   smi:v1alpha2

$ kubectl get crd trafficsplits.split.smi-spec.io -o yaml | grep v1alpha
    name: v1alpha2
  - v1alpha2

@johnsonshi
Contributor Author

@stefanprodan:

@michelleN and I were able to figure out why the error message TrafficSplit podinfo.test create error: TrafficSplit.split.smi-spec.io "podinfo" is invalid: spec.backends.weight: Required value is being emitted.


In the Flagger codebase, the TrafficSplitBackend struct is defined as shown below. Note how both Service and Weight have the omitempty option.

// TrafficSplitBackend defines a backend
type TrafficSplitBackend struct {
    Service string `json:"service,omitempty"`
    Weight  int    `json:"weight,omitempty"`
}


According to Go's encoding/json Marshal docs at https://golang.org/pkg/encoding/json/#Marshal:
"The "omitempty" option specifies that the field should be omitted from the encoding if the field has an empty value, defined as false, 0, a nil pointer, a nil interface value, and any empty array, slice, map, or string."


The omitempty option causes issues when the TrafficSplit resource is created as part of the canary deployment. Because the canary backend's weight is zero and Weight has the omitempty option, Go omits the Weight field for the canary backend when the struct is marshalled to JSON.


The TrafficSplit struct is constructed in the SMI v1alpha2 router, where the backend entry for the canary service is created with a weight of 0:

Backends: []smiv1alpha2.TrafficSplitBackend{
    {
        Service: canaryName,
        Weight:  0,
    },


The error is emitted when the SMI v1alpha2 router attempts to create the TrafficSplit custom resource. Because JSON marshalling drops the canary backend's weight (due to omitempty), the API server rejects the object with the missing backend service weight error.

_, err := sr.smiClient.SplitV1alpha2().TrafficSplits(canary.Namespace).Create(context.TODO(), t, metav1.CreateOptions{})
if err != nil {
    return fmt.Errorf("TrafficSplit %s.%s create error: %w", apexName, canary.Namespace, err)
}


As can be seen in the SMI Go SDK repo, neither Service nor Weight (on TrafficSplitBackend) has the omitempty option. @michelleN mentioned that the SMI Go SDK previously had this same issue and it was fixed in this PR: servicemeshinterface/smi-sdk-go@b365b06
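
To illustrate the marshalling behaviour in isolation, here is a minimal standalone Go sketch (not taken from the Flagger codebase) showing that a zero Weight is dropped when the field has omitempty and kept when it does not:

package main

import (
    "encoding/json"
    "fmt"
)

// Same shape as Flagger's current struct: zero-valued fields are dropped.
type backendWithOmitempty struct {
    Service string `json:"service,omitempty"`
    Weight  int    `json:"weight,omitempty"`
}

// Same shape as the SMI SDK struct: zero-valued fields are kept.
type backendWithoutOmitempty struct {
    Service string `json:"service"`
    Weight  int    `json:"weight"`
}

func main() {
    a, _ := json.Marshal(backendWithOmitempty{Service: "podinfo-canary", Weight: 0})
    b, _ := json.Marshal(backendWithoutOmitempty{Service: "podinfo-canary", Weight: 0})
    fmt.Println(string(a)) // {"service":"podinfo-canary"} -- weight is missing
    fmt.Println(string(b)) // {"service":"podinfo-canary","weight":0}
}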

@johnsonshi changed the title from "Flagger cannot create TrafficSplit when creating canary custom resource using OSM SMI" to "Flagger omits TrafficSplit backend service weight if weight is 0 due to omitempty option" on Jun 13, 2021
@johnsonshi
Contributor Author

Removed omitempty and followed the development guide steps (built a new image, pushed it to Docker Hub, scaled down Flagger, set the Flagger image to my Docker Hub image, then scaled Flagger back up).
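
For reference, the struct with omitempty removed from both fields looks roughly like this (a sketch of the change, not the exact diff):

// TrafficSplitBackend defines a backend
type TrafficSplitBackend struct {
    Service string `json:"service"`
    Weight  int    `json:"weight"`
}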

After deploying the canary deployment (kubectl apply -f ./podinfo-canary.yaml), here's the output:

$ kubectl describe -f ./podinfo-canary.yaml
...
Status:
  Canary Weight:  0
  Conditions:
    Last Transition Time:  2021-06-14T01:36:18Z
    Last Update Time:      2021-06-14T01:36:18Z
    Message:               Deployment initialization completed.
    Reason:                Initialized
    Status:                True
    Type:                  Promoted
  Failed Checks:           0
  Iterations:              0
  Last Applied Spec:       99dc84b6f
  Last Transition Time:    2021-06-14T01:36:18Z
  Phase:                   Initialized
  Tracked Configs:
Events:
  Type     Reason  Age                    From     Message
  ----     ------  ----                   ----     -------
  Warning  Synced  3m51s                  flagger  podinfo-primary.test not ready: waiting for rollout to finish: observed deployment generation less then desired generation
  Normal   Synced  3m21s (x2 over 3m51s)  flagger  all the metrics providers are available!
  Normal   Synced  3m21s                  flagger  Initialization done! podinfo.test

So now the invalid TrafficSplit SMI spec issue is resolved.


However, after triggering a canary deployment by updating the container image, we get another issue:

# Trigger a canary deployment by updating the container image:
$ kubectl -n test set image deployment/podinfo \
podinfod=stefanprodan/podinfo:3.1.1
...
$ kubectl -n test describe canary/podinfo
...
    Name:         podinfo
Status:
  Canary Weight:  0
  Conditions:
    Last Transition Time:  2021-06-14T01:44:18Z
    Last Update Time:      2021-06-14T01:44:18Z
    Message:               Canary analysis failed, Deployment scaled to zero.
    Reason:                Failed
    Status:                False
    Type:                  Promoted
  Failed Checks:           0
  Iterations:              0
  Last Applied Spec:       5f8fd4f546
  Last Transition Time:    2021-06-14T01:44:18Z
  Phase:                   Failed
  Tracked Configs:
Events:
  Type     Reason  Age                    From     Message
  ----     ------  ----                   ----     -------
  Warning  Synced  8m42s                  flagger  podinfo-primary.test not ready: waiting for rollout to finish: observed deployment generation less then desired generation
  Normal   Synced  8m12s (x2 over 8m42s)  flagger  all the metrics providers are available!
  Normal   Synced  8m12s                  flagger  Initialization done! podinfo.test
  Normal   Synced  3m12s                  flagger  New revision detected! Scaling up podinfo.test
  Warning  Synced  2m42s                  flagger  Halt podinfo.test advancement pre-rollout check acceptance-test failed Post "http://flagger-loadtester.test/": read tcp 10.240.0.48:42134->10.0.108.37:80: read: connection reset by peer
  Warning  Synced  2m12s                  flagger  Halt podinfo.test advancement pre-rollout check acceptance-test failed Post "http://flagger-loadtester.test/": read tcp 10.240.0.48:42416->10.0.108.37:80: read: connection reset by peer
  Warning  Synced  102s                   flagger  Halt podinfo.test advancement pre-rollout check acceptance-test failed Post "http://flagger-loadtester.test/": read tcp 10.240.0.48:42696->10.0.108.37:80: read: connection reset by peer
  Warning  Synced  72s                    flagger  Halt podinfo.test advancement pre-rollout check acceptance-test failed Post "http://flagger-loadtester.test/": read tcp 10.240.0.48:42972->10.0.108.37:80: read: connection reset by peer
  Normal   Synced  42s (x5 over 2m42s)    flagger  Starting canary analysis for podinfo.test
  Warning  Synced  42s                    flagger  Halt podinfo.test advancement pre-rollout check acceptance-test failed Post "http://flagger-loadtester.test/": read tcp 10.240.0.48:43246->10.0.108.37:80: read: connection reset by peer
  Warning  Synced  12s                    flagger  Rolling back podinfo.test failed checks threshold reached 5
  Warning  Synced  12s                    flagger  Canary failed! Scaling down podinfo.test

@johnsonshi
Contributor Author

Tried disabling the pre-rollout webhook and running the canary analysis again. This time it completed without any issues. The failures only occur when pre-rollout webhook checks are enabled. See https://docs.flagger.app/usage/webhooks
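
For clarity, here is the webhooks section of podinfo-canary.yaml with the pre-rollout check removed (only the rollout load-test webhook remains):

    webhooks:
      - name: load-test
        type: rollout
        url: http://flagger-loadtester.test/
        metadata:
          cmd: "hey -z 2m -q 10 -c 2 http://podinfo-canary.test:9898/"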

@michelleN, it seems like my OSM/Kubernetes cluster has traffic policy issues that prevent the Flagger load tester from being reached. I installed OSM with --enable-permissive-traffic-policy, so in theory all pod-to-pod connections should be allowed. @stefanprodan, input appreciated as well :)

$ kubectl describe -f ./podinfo-canary.yaml
...
Status:
  Canary Weight:  0
  Conditions:
    Last Transition Time:  2021-06-14T02:14:26Z
    Last Update Time:      2021-06-14T02:14:26Z
    Message:               New revision detected, progressing canary analysis.
    Reason:                Progressing
    Status:                Unknown
    Type:                  Promoted
  Failed Checks:           2
  Iterations:              0
  Last Applied Spec:       5f8fd4f546
  Last Transition Time:    2021-06-14T02:15:56Z
  Phase:                   Progressing
  Tracked Configs:
Events:
  Type     Reason  Age                    From     Message
  ----     ------  ----                   ----     -------
  Warning  Synced  3m24s                  flagger  podinfo-primary.test not ready: waiting for rollout to finish: observed deployment generation less then desired generation
  Normal   Synced  2m54s (x2 over 3m24s)  flagger  all the metrics providers are available!
  Normal   Synced  2m54s                  flagger  Initialization done! podinfo.test
  Normal   Synced  114s                   flagger  New revision detected! Scaling up podinfo.test
  Warning  Synced  84s                    flagger  canary deployment podinfo.test not ready: waiting for rollout to finish: 1 of 2 updated replicas are available
  Warning  Synced  54s                    flagger  Halt podinfo.test advancement pre-rollout check acceptance-test failed Post "http://flagger-loadtester.test/": read tcp 10.240.0.26:51740->10.0.108.37:80: read: connection reset by peer
  Normal   Synced  24s (x2 over 54s)      flagger  Starting canary analysis for podinfo.test
  Warning  Synced  24s                    flagger  Halt podinfo.test advancement pre-rollout check acceptance-test failed Post "http://flagger-loadtester.test/": read tcp 10.240.0.26:51982->10.0.108.37:80: read: connection reset by peer
$ kubectl get pods -n test
NAME                                  READY   STATUS    RESTARTS   AGE
flagger-loadtester-64695f854f-nq8l9   2/2     Running   0          41m
podinfo-5f8fd4f546-frlgk              2/2     Running   0          114s
podinfo-5f8fd4f546-xj4k7              2/2     Running   0          2m8s
podinfo-primary-5b5f487c87-rgcfq      2/2     Running   0          2m53s
podinfo-primary-5b5f487c87-w2lrd      2/2     Running   0          3m37s

$ osm policy check-pods test/podinfo-5f8fd4f546-frlgk test/flagger-loadtester-64695f854f-nq8l9
[+] Permissive mode enabled for mesh operated by osm-controller running in 'osm-system' namespace

 [+] Pod 'test/podinfo-5f8fd4f546-frlgk' is allowed to communicate to pod 'test/flagger-loadtester-64695f854f-nq8l9'

$ osm policy check-pods test/podinfo-5f8fd4f546-xj4k7 test/flagger-loadtester-64695f854f-nq8l9
[+] Permissive mode enabled for mesh operated by osm-controller running in 'osm-system' namespace

 [+] Pod 'test/podinfo-5f8fd4f546-xj4k7' is allowed to communicate to pod 'test/flagger-loadtester-64695f854f-nq8l9'

$ osm policy check-pods test/flagger-loadtester-64695f854f-nq8l9 test/podinfo-5f8fd4f546-frlgk
[+] Permissive mode enabled for mesh operated by osm-controller running in 'osm-system' namespace

 [+] Pod 'test/flagger-loadtester-64695f854f-nq8l9' is allowed to communicate to pod 'test/podinfo-5f8fd4f546-frlgk'

$ osm policy check-pods test/flagger-loadtester-64695f854f-nq8l9 test/podinfo-5f8fd4f546-xj4k7
[+] Permissive mode enabled for mesh operated by osm-controller running in 'osm-system' namespace

 [+] Pod 'test/flagger-loadtester-64695f854f-nq8l9' is allowed to communicate to pod 'test/podinfo-5f8fd4f546-xj4k7'

@stefanprodan
Member

Please open a PR to remove omitempty from both the v1alpha2 and v1alpha3 types; we need to remove it from both service and weight. Thank you.

As with AppMesh, we could tell people how to allow traffic between the load tester and the app.

@stefanprodan
Member

@johnsonshi Flagger 1.12.0 has been released and contains your fix; please give it a try.
