Passing client-side certs through two NGINX servers using MTLS and delivering them to a back-end application #11810
This issue is currently awaiting triage. If Ingress contributors determine this is a relevant issue, they will accept it by applying the appropriate triage label.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
What is the helm charts repo and chart name? The info almost proves it's not a release from this project.
…On Thu, 15 Aug 2024, 19:04, alnhk wrote:
- Embed the image inline in the issue description
- Add answers to the questions that are asked in a new bug report template
- Do you mean the ingress-nginx controller every single time you typed NGINX?
- Many guesses have to be made to make informed comments, so the answers to the questions asked in a new bug report template will help, as mentioned before. But ensure you include the kubectl describe output of every single controller, ingress, and other related resource, because describing them in words does not make good input for analysis and comments
- If you meant to ask about the NGINX webserver and reverse proxy by the company NGINX Inc., which is owned by the company called F5, then this is not the GitHub repo for that
1. image name: 3.6.1 nginx-ingress (plus-fips, alpine based); built an image from source with licensed nginx certs.
2. used "Open a blank issue" (https://github.com/kubernetes/ingress-nginx/issues/new) and not the "bug report template".
Anyway, below are the answers to the queries from the "bug report template":
nginx-ingress-version :
- nginx version: nginx/1.25.5 (nginx-plus-r32-p1)
- based out of the ingress-nginx helm chart 3.6.1
kubernetes version : v1.27.11
Environment:
- VMware
- RHEL 8.10
- cluster created using kubeadm
How was the ingress-nginx-controller installed:
NAME           NAMESPACE  REVISION  UPDATED                                  STATUS    CHART                APP VERSION
nginx-ingress  acme-dev   32        2024-07-24 15:23:29.182818927 +0000 UTC  deployed  nginx-ingress-1.3.1  3.6.1
2. yes, the ingress nginx controller
3. regarding kubectl describe, sharing (partially) below
4. not the nginx webserver.
Name: nginx-ingress-dev-controller-6d466f57f-42xhr
Namespace: acme-dev
Priority: 0
Service Account: acme-dev
Node: acme.example.com/10.240.1.201
Start Time: Thu, 15 Aug 2024 10:17:23 +0000
Labels: app.kubernetes.io/instance=nginx-ingress
app.kubernetes.io/name=nginx-ingress
app.kubernetes.io/version=3.6.1-SNAPSHOT
app.nginx.org/version=1.25.5-nginx-plus-r32-p1
acme-nginx=acme-nginx-ingress
pod-template-hash=6d466f57f
Annotations: kubectl.kubernetes.io/restartedAt: 2024-08-13T06:28:49Z
prometheus.io/port: 9113
prometheus.io/scheme: http
prometheus.io/scrape: true
Status: Running
SeccompProfile: RuntimeDefault
IP: 10.9.0.150
IPs:
IP: 10.9.0.150
Controlled By: ReplicaSet/nginx-ingress-dev-controller-6d466f57f
Containers:
nginx-ingress:
Container ID: cri-o://ceded0dc0bc08421fe4653572eddb444e720529e1904cfb5e4e04b0623fcc549
Image: acme.example.com/nginx-controller-custom/nginx-plus-ingress:3.6.1-alpine-image-plus-fips-572aae2a
Image ID: ***@***.***:f0152a10c0eb0562b856880e0ef1fb0a8942b55f067a46ab25420a2886c52ad7
Ports: 80/TCP, 443/TCP, 9113/TCP, 8081/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP, 0/TCP
Args:
-nginx-plus=true
-nginx-reload-timeout=60000
-enable-app-protect=false
-enable-app-protect-dos=false
-nginx-configmaps=$(POD_NAMESPACE)/amce-dev
-default-server-tls-secret=$(POD_NAMESPACE)/acme-dev-default-server-tls
-ingress-class=acme-dev
-watch-namespace=acme-dev
-health-status=true
-health-status-uri=/_nginx-health
-nginx-debug=false
-v=1
-nginx-status=true
-nginx-status-port=8080
-nginx-status-allow-cidrs=127.0.0.1
-report-ingress-status
-external-service=acme-dev-controller
-enable-leader-election=true
-leader-election-lock-name=nginx-ingress-leader
-enable-prometheus-metrics=true
-prometheus-metrics-listen-port=9113
-prometheus-tls-secret=
-enable-service-insight=false
-service-insight-listen-port=9114
-service-insight-tls-secret=
-enable-custom-resources=false
-enable-snippets=true
-include-year=false
-disable-ipv6=false
-ready-status=true
-ready-status-port=8081
-enable-latency-metrics=true
-ssl-dynamic-reload=true
-enable-telemetry-reporting=false
-weight-changes-dynamic-reload=true
State: Running
Started: Thu, 15 Aug 2024 10:17:28 +0000
Ready: True
Restart Count: 0
Limits:
cpu: 4
memory: 8Gi
Requests:
cpu: 4
memory: 8Gi
Readiness: http-get http://:readiness-port/nginx-ready delay=0s timeout=1s period=1s #success=1 #failure=3
Environment:
POD_NAMESPACE: acme-dev (v1:metadata.namespace)
POD_NAME: acme-dev-controller-6d466f57f-42xhr (v1:metadata.name)
Mounts:
/etc/nginx/root-ca/rootca.pem from root-ca (ro,path="acme-dev-mtls-root-ca")
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-26zzz (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
root-ca:
Type: Secret (a volume populated by a Secret)
SecretName: acme-dev-mtls-root-ca
Optional: false
nginx-js:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: acme-dev
Optional: false
kube-api-access-26zzz:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Guaranteed
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
Regarding "helm charts repo and chart name" = We usually do this way :
We have been doing this all the way since 3.4.x and deployment was good, however, we are trying to achieve "mtls to mtls" handshake as described here. |
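For reference, a rough sketch of that vendoring workflow; the git and helm commands below are only stand-ins for the internal tooling, and the paths, release name, and namespace are illustrative:

```sh
# Fetch the upstream chart at the pinned tag
git clone --depth 1 --branch v3.6.1 https://github.com/nginxinc/kubernetes-ingress.git

# Vendor the chart into an internal charts repository (path is illustrative)
cp -r kubernetes-ingress/charts/nginx-ingress internal-charts/nginx-ingress

# Deploy from the vendored copy (the real deployment is done by an internal tool)
helm upgrade --install nginx-ingress ./internal-charts/nginx-ingress --namespace acme-dev
```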
This is final and conclusive proof of the wrong GitHub project. This is the K8s community project. That link is the NGINX Inc project.
So, wasting the time, is this below also the wrong GitHub project and not k8s?
#3511
That issue is obviously on this project.
This is the Kubernetes sub-project, not the F5 NGINX ingress controller or NGINX. /close
Is it possible to configure NGINX to pass client-side certificates through two NGINX servers and send the original client-side certificate to the destination app?
I've included a diagram below:
Highlights are:
3. Trusted certs and requisite CA certs are configured.
The ingress object for the application has the following configuration for it.
Looking for a solution:
Whenever we hit Nginx-A, the client certificate does show up. However, when the upstream service under Nginx-A is down or exhausted, spill-over/failover to Nginx-B happens and everything is confirmed working, except that at Nginx-B we are not seeing any client certificate passed down, and the backend prints:
{"error": "no client certificate"}
Any insight is appreciated w.r.t. Nginx-B, where we expect to see the client certificate passed down so that the mTLS-to-mTLS flow works properly. The reason for the ask: if we set "ssl_verify_client" to "optional", it works fine all the way from client -> Nginx-A (with spill-over to Nginx-B); however, if we set ssl_verify_client to "on", the spill-over to Nginx-B fails with "HTTP 400 No required SSL certificate was sent".