improve API documentation to explain sourceLabels and sourceNamespaces are selectors, not filters #3258

Open

pavelkuchin opened this issue May 30, 2024 · 2 comments

pavelkuchin commented May 30, 2024

Is this the right place to submit this?

  • This is not a security vulnerability or a crashing bug
  • This is not a question about how to use Istio

Bug Description

If we match a route by sourceLabels only and the label is not present on the calling pod, Istio routes the request to the Kubernetes Service (round robin across all subsets) instead of returning 404.

If I use queryParams instead of sourceLabels, Istio returns 404 as expected (see the sketch after the configuration below for comparison).

The issue is reproducible in 1.22.0.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: appa-dev-deployment
  labels:
    app: appa
    venv: dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: appa
      venv: dev
  template:
    metadata:
      labels:
        app: appa
        venv: dev
    spec:
      containers:
      - name: appa-dev
        image: localhost:5000/appaa:latest
        imagePullPolicy: Always
        env:
        - name: VENV
          value: appa-dev
        ports:
        - containerPort: 5000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: appa-qa-deployment
  labels:
    app: appa
    venv: qa
spec:
  replicas: 1
  selector:
    matchLabels:
      app: appa
      venv: qa
  template:
    metadata:
      labels:
        app: appa
        venv: qa
    spec:
      containers:
      - name: appa-qa
        image: localhost:5000/appaa:latest
        imagePullPolicy: Always
        env:
        - name: VENV
          value: appa-qa
        ports:
        - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: appa
spec:
  selector:
    app: appa
  ports:
    - protocol: TCP
      name: http-app
      port: 80
      targetPort: 5000
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: dr-appa
spec:
  host: appa
  subsets:
  - name: qa
    labels:
      app: appa
      venv: qa
  - name: dev
    labels:
      app: appa
      venv: dev
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: vs-appa-qa
spec:
  hosts:
  - appa.test.svc.cluster.local
  gateways:
  - mesh
  http:
  - name: "vs-appa-qa-route"
    match:
    - uri:
        prefix: "/test"
      sourceLabels:
        venv: qa
    route:
    - destination:
        host: appa.test.svc.cluster.local
        subset: qa
  - name: "vs-appa-dev-route"
    match:
    - uri:
        prefix: "/test"
      sourceLabels:
        venv: dev
    route:
    - destination:
        host: appa.test.svc.cluster.local
        subset: dev
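
For comparison, the queryParams-based variant mentioned above, which does return 404 when nothing matches, would look roughly like the sketch below. The parameter name venv and the exact values are assumptions; the actual queryParams manifest was not included in this report.

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: vs-appa-queryparam    # hypothetical name, not part of the reproduction above
spec:
  hosts:
  - appa.test.svc.cluster.local
  gateways:
  - mesh
  http:
  - name: "vs-appa-qa-route"
    match:
    - uri:
        prefix: "/test"
      queryParams:
        venv:                 # assumed query parameter; requests without ?venv=qa or ?venv=dev get a 404
          exact: "qa"
    route:
    - destination:
        host: appa.test.svc.cluster.local
        subset: qa
  - name: "vs-appa-dev-route"
    match:
    - uri:
        prefix: "/test"
      queryParams:
        venv:
          exact: "dev"
    route:
    - destination:
        host: appa.test.svc.cluster.local
        subset: dev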

Testing with the label:

MAC-X:local-istio-sandbox Pavel.Kuchin$ kubectl --context kind-kind --kubeconfig ~/.kube/kind -n test run -i --tty --rm alpine -l "venv=qa" --image=alpine/curl --restart=Never -- sh
If you don't see a command prompt, try pressing enter.
/ # curl appa/test
Hello from app A! - appa-qa/ # 
/ # curl appa/test
Hello from app A! - appa-qa/ # 
/ # curl appa/test
Hello from app A! - appa-qa/ # 
/ # curl appa/test
Hello from app A! - appa-qa/ # 
/ # curl appa/test
Hello from app A! - appa-qa/ # 

Testing without the label:

MAC-X:local-istio-sandbox Pavel.Kuchin$ kubectl --context kind-kind --kubeconfig ~/.kube/kind -n test run -i --tty --rm alpine --image=alpine/curl --restart=Never -- sh
If you don't see a command prompt, try pressing enter.
/ # curl appa/test
Hello from app A! - appa-qa/ # 
/ # curl appa/test
Hello from app A! - appa-dev/ # 
/ # curl appa/test
Hello from app A! - appa-dev/ # 
/ # curl appa/test
Hello from app A! - appa-qa/ # 
/ # curl appa/test
Hello from app A! - appa-dev/ # 

Version

MAC-X:local-istio-sandbox Pavel.Kuchin$ istioctl --context kind-kind --kubeconfig ~/.kube/kind -n test version
client version: 1.15.0
control plane version: 1.22.0
data plane version: 1.22.0 (4 proxies)
MAC-X:local-istio-sandbox Pavel.Kuchin$ kubectl --context kind-kind --kubeconfig ~/.kube/kind -n test version --short
Flag --short has been deprecated, and will be removed in the future. The --short output will become the default.
Client Version: v1.25.1
Kustomize Version: v4.5.7
Server Version: v1.25.0

Additional Information

MAC-X:local-istio-sandbox Pavel.Kuchin$ istioctl --context kind-kind --kubeconfig ~/.kube/kind -n test bug-report

Target cluster context: kind-kind
Running with the following config: 

kubeconfig: /Users/Pavel.Kuchin/.kube/kind
context: kind-kind
istio-namespace: istio-system
full-secrets: false
timeout (mins): 30
include: {  }
exclude: { Namespaces: kube-node-lease,kube-public,kube-system,local-path-storage }
end-time: 2024-05-30 18:35:53.78819 -0400 EDT



Cluster endpoint: https://127.0.0.1:57549
CLI version:
version.BuildInfo{Version:"1.15.0", GitRevision:"e3364ab424b70ca8ee1ca76cb0b3afb73476aaac-dirty", GolangVersion:"go1.18.5", BuildStatus:"Clean", GitTag:"1.15.0"}

The following Istio control plane revisions/versions were found in the cluster:
Revision 1-22:
&version.MeshInfo{
    {
        Component: "istiod",
        Info:      version.BuildInfo{Version:"1.22.0", GitRevision:"aaf597fbfae607adf4bb4e77538a7ea98995328a", GolangVersion:"", BuildStatus:"Clean", GitTag:"1.22.0"},
    },
}

The following proxy revisions/versions were found in the cluster:
Revision 1-22: Versions {1.22.0}


Fetching proxy logs for the following containers:

istio-system/istio-ingressgateway/istio-ingressgateway-778f87f797-t8zjr/istio-proxy
istio-system/istiod-1-22/istiod-1-22-68cd7d49c8-n766z/discovery
kubernetes-dashboard/dashboard-metrics-scraper/dashboard-metrics-scraper-7cc7856cfb-cjn2d/dashboard-metrics-scraper
kubernetes-dashboard/kubernetes-dashboard/kubernetes-dashboard-b8df5b7bc-2pggm/kubernetes-dashboard
test//alpine/alpine
test//alpine/istio-proxy
test/appa-dev-deployment/appa-dev-deployment-7b5ddbc69f-hqlhs/appa-dev
test/appa-dev-deployment/appa-dev-deployment-7b5ddbc69f-hqlhs/istio-proxy
test/appa-qa-deployment/appa-qa-deployment-8d8885b8d-27gw2/appa-qa
test/appa-qa-deployment/appa-qa-deployment-8d8885b8d-27gw2/istio-proxy

Fetching Istio control plane information from cluster.

Running istio analyze on all namespaces and report as below:
Analysis Report:
Warning [IST0108] (Pod test/alpine) Unknown annotation: istio.io/rev
Warning [IST0108] (Pod test/appa-dev-deployment-7b5ddbc69f-hqlhs) Unknown annotation: istio.io/rev
Warning [IST0108] (Pod test/appa-qa-deployment-8d8885b8d-27gw2) Unknown annotation: istio.io/rev
Info [IST0102] (Namespace default) The namespace is not enabled for Istio injection. Run 'kubectl label namespace default istio-injection=enabled' to enable it, or 'kubectl label namespace default istio-injection=disabled' to explicitly mark it as not needing injection.
Info [IST0102] (Namespace kubernetes-dashboard) The namespace is not enabled for Istio injection. Run 'kubectl label namespace kubernetes-dashboard istio-injection=enabled' to enable it, or 'kubectl label namespace kubernetes-dashboard istio-injection=disabled' to explicitly mark it as not needing injection.
Info [IST0118] (Service kubernetes-dashboard/dashboard-metrics-scraper) Port name  (port: 8000, targetPort: 8000) doesn't follow the naming convention of Istio port.
Info [IST0118] (Service kubernetes-dashboard/kubernetes-dashboard) Port name  (port: 443, targetPort: 8443) doesn't follow the naming convention of Istio port.
Done.
@howardjohn
Member

Interesting. I am actually not sure this is obviously the wrong behavior (it is certainly under-specified).

I can see the argument that it should 404: any other unmatched rule would, so why not sourceLabels?

However, sourceLabels and sourceNamespace are not like other match rules; they probably should not have been under match but rather a top-level workloadSelector, as in other APIs. These fields are not doing runtime matches; they are actually filtering which workloads the VirtualService applies to. The docs somewhat hint at this: "labels that constrain the applicability of a rule to source (client) workloads with the given labels"
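
For context, this is roughly what the top-level workloadSelector pattern looks like in another Istio API, the Sidecar resource; the resource name below is illustrative only:

apiVersion: networking.istio.io/v1beta1
kind: Sidecar
metadata:
  name: qa-sidecar          # illustrative name
  namespace: test
spec:
  workloadSelector:         # top-level selector: the resource only applies to workloads with these labels
    labels:
      venv: qa
  egress:
  - hosts:
    - "./*"                 # allow egress to services in the same namespace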

@pavelkuchin
Author

> Interesting. I am actually not sure this is obviously the wrong behavior (it is certainly under-specified).
>
> I can see the argument that it should 404: any other unmatched rule would, so why not sourceLabels?
>
> However, sourceLabels and sourceNamespace are not like other match rules; they probably should not have been under match but rather a top-level workloadSelector, as in other APIs. These fields are not doing runtime matches; they are actually filtering which workloads the VirtualService applies to. The docs somewhat hint at this: "labels that constrain the applicability of a rule to source (client) workloads with the given labels"

Got it, thank you for the clarification @howardjohn! It would definitely be helpful to note this behavior explicitly in the documentation. (IMHO the hint is good, but it makes you second-guess how it really works.)
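
As a practical aside for readers who hit the same behavior: since routes guarded by sourceLabels are simply not applied to workloads lacking those labels, the fallback can be made deterministic by appending a catch-all route without sourceLabels as the last entry under http: in the VirtualService above. This is only a sketch based on the configuration in this issue, not something proposed in the thread:

  - name: "vs-appa-fallback-route"    # appended after the qa and dev routes
    match:
    - uri:
        prefix: "/test"
    route:
    - destination:
        host: appa.test.svc.cluster.local
        subset: qa                     # send unlabeled clients to one subset explicitly instead of service-wide round robin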

@howardjohn howardjohn changed the title Istio routing doesn't work If matching by sourceLabels and the label is not present in calling pod. improve API documentation to explain sourceLabels and sourceNamespaces are selectors, not filters Jul 8, 2024
@howardjohn howardjohn transferred this issue from istio/istio Jul 8, 2024