
Passing client-side certs through two NGINX servers using MTLS and delivering them to a back-end application #11810

Closed
alnhk opened this issue Aug 15, 2024 · 9 comments
Labels
needs-kind, needs-priority, needs-triage

Comments

@alnhk

alnhk commented Aug 15, 2024

Is it possible to configure NGINX to pass client-side certificates through two NGINX servers and deliver the original client certificate to the destination app?

I've included a diagram below:

Highlights are:

  1. curl (or a browser) is configured with client-side certs (a typical invocation is sketched after this list).
  2. Mutual TLS authentication is required on both NGINX servers.
  3. Trusted certs and the requisite CA certs are configured.
  4. NGINX takes the client-side cert and passes it on as HTTP header fields.
  5. The 2nd NGINX server is actually in a K8s cluster.
  6. On the 1st NGINX, if the proxied upstream or service is exhausted or down, we add a mechanism to return HTTP 503 (one way to wire this up is sketched after the config below).
  7. When the 1st NGINX returns HTTP 503, the request is passed down to the 2nd NGINX (in another region).
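
For illustration, a minimal curl invocation matching highlight 1; the file names and hostname here are placeholders, not taken from this report:

curl --cert client.crt --key client.key --cacert ca-bundle.crt https://nginx-a.example.com/upstream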
FILE: /etc/nginx/nginx.conf (on NGINX-A)
http {
    upstream k8s {
        server nginx-b.k8s.example.com;
    }
    server {
        listen                 443 ssl;

        # mTLS with the downstream client
        ssl_verify_client      on;
        ssl_certificate        /etc/nginx/certs/identity.pem;
        ssl_certificate_key    /etc/nginx/certs/identity.key;
        ssl_client_certificate /etc/nginx/certs/ca-bundle.crt;

        location /upstream {
            proxy_pass https://nginx-b.k8s.example.com;

            # forward the verified client certificate and its DNs as headers
            proxy_set_header ssl-client-cert       $ssl_client_escaped_cert;
            proxy_set_header ssl-client-subject-dn $ssl_client_s_dn;
            proxy_set_header ssl-client-issuer-dn  $ssl_client_i_dn;

            # mTLS with the upstream (NGINX-B), presenting NGINX-A's own identity cert
            proxy_ssl_certificate          /etc/nginx/certs/identity.pem;
            proxy_ssl_certificate_key      /etc/nginx/certs/identity.key;
            proxy_ssl_trusted_certificate  /etc/nginx/certs/ca-bundle.crt;
            proxy_ssl_verify               on;
            proxy_ssl_verify_depth         4;
        }
    }
}
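
The config above forwards the cert headers, but it does not by itself implement the 503 spill-over described in highlights 6 and 7. One common NGINX pattern for that is a backup server in the upstream pool combined with proxy_next_upstream. The sketch below assumes a hypothetical local backend (local-app.example.com) and pool name, and is not taken from the reporter's actual config:

http {
    upstream app_pool {
        # primary backend; marked failed after repeated errors
        server local-app.example.com:443 max_fails=3 fail_timeout=10s;
        # spill-over target, used only when the primary is unavailable
        server nginx-b.k8s.example.com:443 backup;
    }
    server {
        location /upstream {
            # retry the next pool member on connection errors, timeouts, or HTTP 503
            proxy_next_upstream error timeout http_503;
            proxy_pass https://app_pool;
        }
    }
}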

The Ingress object for the application is configured as follows:

ingress:
  enabled: true
  annotations:
    # Enable client certificate authentication
    nginx.ingress.kubernetes.io/auth-tls-verify-client: "on"
    # Create the secret containing the trusted ca certificates
    nginx.ingress.kubernetes.io/auth-tls-secret: "my-ns/ca-bundle"
    # Specify the verification depth in the client certificates chain
    nginx.ingress.kubernetes.io/auth-tls-verify-depth: "4"
    # Specify if certificates are passed to upstream server
    nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream: "true"
  tls:
    - secretName: cert-to-use
      hosts:
        - mydns.in.k8s.example.com
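
For context: with the kubernetes ingress-nginx controller, auth-tls-pass-certificate-to-upstream: "true" forwards the URL-escaped client certificate to the backend in the ssl-client-cert header, alongside headers such as ssl-client-verify and ssl-client-subject-dn. The generated server block is roughly equivalent to this sketch (simplified; the secret path is an assumption, not the controller's exact output):

ssl_verify_client      on;
ssl_verify_depth       4;
ssl_client_certificate /etc/ingress-controller/ssl/my-ns-ca-bundle.pem;

location / {
    proxy_set_header ssl-client-verify     $ssl_client_verify;
    proxy_set_header ssl-client-subject-dn $ssl_client_s_dn;
    proxy_set_header ssl-client-issuer-dn  $ssl_client_i_dn;
    proxy_set_header ssl-client-cert       $ssl_client_escaped_cert;
    # ... proxy_pass to the backend service
}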

What we are looking for:

Whenever we hit NGINX-A, the client certificate does show up. However, when the upstream service under NGINX-A is down or exhausted, spill-over/failover to NGINX-B happens and is confirmed working, yet at NGINX-B we do not see any client certificate passed down, and the app prints {"error": "no client certificate"}.

Any insight is appreciated regarding NGINX-B, where we expect to see the client certificate passed down so that the mTLS-to-mTLS flow works properly. The reason for the ask: if we set ssl_verify_client to "optional", it works fine all the way from the client through NGINX-A (with spill-over to NGINX-B); however, if we set ssl_verify_client to "on", the spill-over to NGINX-B fails with HTTP 400 "No required SSL certificate was sent".
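
One note that may explain the behavior: mTLS proves possession of the client's private key, which never leaves the client, so NGINX-A cannot re-present the original client certificate on the hop to NGINX-B; it authenticates that hop with its own proxy_ssl_certificate. With ssl_verify_client on, the certificate NGINX-B verifies is therefore NGINX-A's, and the original client certificate can only arrive as the forwarded ssl-client-cert header. Below is a minimal sketch of an NGINX-B server that requires mTLS from NGINX-A itself and relies on the forwarded header; the file names and backend name are assumptions, not from this report:

server {
    listen 443 ssl;
    ssl_certificate        /etc/nginx/certs/identity-b.pem;
    ssl_certificate_key    /etc/nginx/certs/identity-b.key;
    # trust the CA that issued NGINX-A's identity cert, so the proxy hop passes mTLS
    ssl_client_certificate /etc/nginx/certs/proxy-ca.crt;
    ssl_verify_client      on;

    location / {
        # the original client certificate arrives only in the ssl-client-cert header
        # set by NGINX-A; request headers are passed through to the backend by default
        proxy_pass https://backend-app;
    }
}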

@k8s-ci-robot k8s-ci-robot added the needs-triage label Aug 15, 2024
@k8s-ci-robot
Contributor

This issue is currently awaiting triage.

If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot added the needs-kind and needs-priority labels Aug 15, 2024
@longwuyuan
Contributor

  • Embed the image inline in the issue description.

  • Add answers to the questions asked in a new bug report template.

  • Do you mean the ingress-nginx controller every single time you typed NGINX?

  • Many guesses have to be made to comment in an informed way, so the answers to the questions asked in a new bug report template will help, as mentioned before. But be sure to include the kubectl describe output of the controller, every Ingress, and the other related resources, because describing them in words does not make good input for analysis and comments.

  • If you meant to ask about the NGINX webserver and reverse proxy from NGINX Inc., the company owned by F5, then this is not the GitHub repo for that.

@alnhk
Author

alnhk commented Aug 15, 2024

  • Embed the image inline in the issue description.
  • Add answers to the questions asked in a new bug report template.
  • Do you mean the ingress-nginx controller every single time you typed NGINX?
  • Many guesses have to be made to comment in an informed way, so the answers to the questions asked in a new bug report template will help, as mentioned before. But be sure to include the kubectl describe output of the controller, every Ingress, and the other related resources, because describing them in words does not make good input for analysis and comments.
  • If you meant to ask about the NGINX webserver and reverse proxy from NGINX Inc., the company owned by F5, then this is not the GitHub repo for that.

  1. Image name: 3.6.1 nginx-ingress (plus-fips, Alpine-based); built an image from source with licensed NGINX certs.
  2. Used "Open a blank issue." and not the bug report template. Anyway, below are the answers to the bug report template's queries.

nginx-ingress version:

  • nginx version: nginx/1.25.5 (nginx-plus-r32-p1)
  • based on the ingress-nginx helm chart 3.6.1

Kubernetes version: v1.27.11
Environment:

  • VMware
  • RHEL 8.10
  • cluster created using kubeadm

How was the ingress-nginx-controller installed:

nginx-ingress	acme-dev	32      	2024-07-24 15:23:29.182818927 +0000 UTC	deployed	nginx-ingress-1.3.1      	3.6.1
  1. Yes, the ingress-nginx controller.
  2. Regarding kubectl describe, sharing partial output below.
  3. Not the NGINX webserver.
Name:             nginx-ingress-dev-controller-6d466f57f-42xhr
Namespace:        acme-dev
Priority:         0
Service Account:  acme-dev
Node:             acme.example.com/10.240.1.201
Start Time:       Thu, 15 Aug 2024 10:17:23 +0000
Labels:           app.kubernetes.io/instance=nginx-ingress
                  app.kubernetes.io/name=nginx-ingress
                  app.kubernetes.io/version=3.6.1-SNAPSHOT
                  app.nginx.org/version=1.25.5-nginx-plus-r32-p1
                  acme-nginx=acme-nginx-ingress
                  pod-template-hash=6d466f57f
Annotations:      kubectl.kubernetes.io/restartedAt: 2024-08-13T06:28:49Z
                  prometheus.io/port: 9113
                  prometheus.io/scheme: http
                  prometheus.io/scrape: true
Status:           Running
SeccompProfile:   RuntimeDefault
IP:               10.9.0.150
IPs:
  IP:           10.9.0.150
Controlled By:  ReplicaSet/nginx-ingress-dev-controller-6d466f57f
Containers:
  nginx-ingress:
    Container ID:  cri-o://ceded0dc0bc08421fe4653572eddb444e720529e1904cfb5e4e04b0623fcc549
    Image:         acme.example.com/nginx-controller-custom/nginx-plus-ingress:3.6.1-alpine-image-plus-fips-572aae2a
    Image ID:      acme.example.com/nginx-controller-custom/nginx-plus-ingress@sha256:f0152a10c0eb0562b856880e0ef1fb0a8942b55f067a46ab25420a2886c52ad7
    Ports:         80/TCP, 443/TCP, 9113/TCP, 8081/TCP
    Host Ports:    0/TCP, 0/TCP, 0/TCP, 0/TCP
    Args:
      -nginx-plus=true
      -nginx-reload-timeout=60000
      -enable-app-protect=false
      -enable-app-protect-dos=false
      -nginx-configmaps=$(POD_NAMESPACE)/amce-dev
      -default-server-tls-secret=$(POD_NAMESPACE)/acme-dev-default-server-tls
      -ingress-class=acme-dev
      -watch-namespace=acme-dev
      -health-status=true
      -health-status-uri=/_nginx-health
      -nginx-debug=false
      -v=1
      -nginx-status=true
      -nginx-status-port=8080
      -nginx-status-allow-cidrs=127.0.0.1
      -report-ingress-status
      -external-service=acme-dev-controller
      -enable-leader-election=true
      -leader-election-lock-name=nginx-ingress-leader
      -enable-prometheus-metrics=true
      -prometheus-metrics-listen-port=9113
      -prometheus-tls-secret=
      -enable-service-insight=false
      -service-insight-listen-port=9114
      -service-insight-tls-secret=
      -enable-custom-resources=false
      -enable-snippets=true
      -include-year=false
      -disable-ipv6=false
      -ready-status=true
      -ready-status-port=8081
      -enable-latency-metrics=true
      -ssl-dynamic-reload=true
      -enable-telemetry-reporting=false
      -weight-changes-dynamic-reload=true
    State:          Running
      Started:      Thu, 15 Aug 2024 10:17:28 +0000
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     4
      memory:  8Gi
    Requests:
      cpu:      4
      memory:   8Gi
    Readiness:  http-get http://:readiness-port/nginx-ready delay=0s timeout=1s period=1s #success=1 #failure=3
    Environment:
      POD_NAMESPACE:  acme-dev (v1:metadata.namespace)
      POD_NAME:       acme-dev-controller-6d466f57f-42xhr (v1:metadata.name)
    Mounts:
      /etc/nginx/root-ca/rootca.pem from root-ca (ro,path="acme-dev-mtls-root-ca")
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-26zzz (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  root-ca:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  acme-dev-mtls-root-ca
    Optional:    false
  nginx-js:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      acme-dev
    Optional:  false
  kube-api-access-26zzz:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Guaranteed
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:                      <none>

@longwuyuan
Contributor

longwuyuan commented Aug 15, 2024 via email

@alnhk
Author

alnhk commented Aug 15, 2024

What is the helm charts repo and chart name? The info almost proves it's not a release from this project.


Regarding the helm charts repo and chart name, we usually do it this way:

  • clone https://github.com/nginxinc/kubernetes-ingress/tree/v3.6.1 (v3.6.1)
  • copy the "charts/nginx-ingress" chart to our own repository
  • deploy it using an internal tool

We have been doing this all the way since 3.4.x and deployment was good; however, we are now trying to achieve the "mTLS to mTLS" handshake as described here.
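
For reference, that workflow amounts to roughly the following (the paths and release name are assumptions, and helm here stands in for the internal deployment tool they mention):

git clone --branch v3.6.1 https://github.com/nginxinc/kubernetes-ingress.git
cp -r kubernetes-ingress/charts/nginx-ingress <internal-chart-repo>/
helm upgrade --install nginx-ingress <internal-chart-repo>/nginx-ingress -n acme-dev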

@longwuyuan
Contributor

longwuyuan commented Aug 15, 2024 via email

@alnhk
Author

alnhk commented Aug 15, 2024

This is final and conclusive proof of the wrong GitHub project. This is the K8s community project. That link is the NGINX Inc. project.


So, not to waste time: is this issue below also the wrong GitHub project and not k8s?
#3511

@longwuyuan
Contributor

longwuyuan commented Aug 15, 2024 via email

@strongjz
Member

This is the Kubernetes subproject, not the F5 NGINX ingress controller or NGINX.

/close
