
upstream timed out (110: Connection timed out) #3469

Closed · lmishii opened this issue Nov 27, 2018 · 2 comments

lmishii commented Nov 27, 2018

NGINX Ingress controller version:
0.20.0

Kubernetes version (use kubectl version):
v1.11.3

Environment:

  • Cloud provider or hardware configuration: bare-metal hardware
  • OS (e.g. from /etc/os-release): Ubuntu 16.04
  • Kernel (e.g. uname -a): 4.4.0-62-generic #83-Ubuntu SMP Wed Jan 18 14:10:15 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
  • Install tools: kubeadm
  • Others:

What happened:
I want to build an oauth2-proxy environment, but the upstream times out when I access http://<my domain>. The NGINX ingress controller log is as follows:

2018/11/27 07:26:12 [error] 725#725: *2102 upstream timed out (110: Connection timed out) while connecting to upstream, client: 192.168.3.22, server: <my domain>, request: "GET / HTTP/1.1", subrequest: "/_external-auth-Lw", upstream: "http://172.16.99.10:80/oauth2/auth", host: "<my domain>"

192.168.3.22 is the IP of my working PC, and I can access http://172.16.99.10:80/oauth2/auth from that PC without a timeout. I want to know why the upstream timed out.
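
One way to narrow this down is to send the same request from inside the ingress controller pod rather than from the PC; a minimal sketch (the pod name is a placeholder, and it assumes curl is available in the controller image):

# Find the controller pod (labels from the stable/nginx-ingress chart),
# then issue the same auth subrequest from inside it:
kubectl get pods -l app=nginx-ingress,component=controller
kubectl exec -it <controller-pod-name> -- curl -v --max-time 5 http://172.16.99.10/oauth2/auth

If this times out while the same curl from the PC succeeds, the problem is in-cluster routing rather than oauth2-proxy itself.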

What you expected to happen:
The upstream should not time out.

How to reproduce it (as minimally and precisely as possible):
metallb installation:

kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.7.3/manifests/metallb.yaml

metallb configuration:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.16.99.1-172.16.99.255

sample-webapp deployment:

kubectl create -f https://raw.githubusercontent.com/ahmetb/gke-letsencrypt/master/yaml/sample-app.yaml

oauth2 proxy deployment:

helm install stable/oauth2-proxy --name oauth2-proxy \
--set config.clientID=xxx \
--set config.clientSecret=xxx \
--set config.cookieSecret=xxx \
--set extraArgs.provider=google

nginx-ingress deployment:

helm install --name nginx-ingress stable/nginx-ingress \
--set rbac.create=true \
--set controller.service.externalTrafficPolicy=Local \
--set controller.service.loadBalancerIP=172.16.99.10

dns record:

A <my domain> 172.16.99.10

ingress deployment:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/auth-url: "http://$host/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "http://$host/oauth2/start?rd=$request_uri"
  name: helloweb
spec:
  rules:
    - host: <my domain>
      http:
        paths:
          - backend:
              serviceName: helloweb-backend
              servicePort: 8080
            path: /
    - http:
        paths:
          - backend:
              serviceName: helloweb-backend
              servicePort: 8080
            path: /

---

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
  name: oauth2-proxy
spec:
  rules:
    - http:
        paths:
          - backend:
              serviceName: oauth2-proxy
              servicePort: 80
            path: /oauth2
    - host: <my domain>
      http:
        paths:
          - backend:
              serviceName: oauth2-proxy
              servicePort: 80
            path: /oauth2

Anything else we need to know:

lmishii (author) commented Dec 13, 2018

Sorry, I resolved it; this is not an nginx-ingress issue but a hairpin-mode issue:
https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/#a-pod-cannot-reach-itself-via-service-ip
When I use Weave as the CNI, everything works fine!
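
For context: the auth subrequest here goes from the controller pod to 172.16.99.10, the controller's own LoadBalancer IP, so the pod is reaching itself through a Service IP, which is exactly the hairpin case the linked page describes. A quick check from that page, run on the node (the bridge name cbr0 is an example; yours may differ):

# Each bridge port should report 1 when hairpin mode is enabled:
for intf in /sys/devices/virtual/net/cbr0/brif/*; do cat $intf/hairpin_mode; done

# Check which --hairpin-mode the kubelet was started with
# (hairpin-veth and promiscuous-bridge allow hairpin traffic; none does not):
ps auxw | grep kubelet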

lmishii closed this as completed Dec 13, 2018
dsx commented May 22, 2020

> Sorry, I resolved it; this is not an nginx-ingress issue but a hairpin-mode issue:
> https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/#a-pod-cannot-reach-itself-via-service-ip
> When I use Weave as the CNI, everything works fine!

It seems that --masquerade-all=true also needs to be enabled on kube-proxy for this to work. Adding this here because this issue comes up in web searches.
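
In a kubeadm cluster this setting lives in the kube-proxy ConfigMap as KubeProxyConfiguration's iptables.masqueradeAll; a sketch of turning it on, assuming the default kube-system objects:

# Set iptables.masqueradeAll: true under the config.conf key,
# then restart the kube-proxy pods so they pick up the change:
kubectl -n kube-system edit configmap kube-proxy
kubectl -n kube-system delete pod -l k8s-app=kube-proxy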
