
Restart nginx in case of errors in the master process #29

Closed
aledbf opened this issue Nov 25, 2016 · 3 comments
@aledbf (Member) commented Nov 25, 2016

https://github.com/kubernetes/ingress/blob/master/controllers/nginx/pkg/cmd/controller/nginx.go#L101
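
For context, the linked line is where the controller launches the nginx master process. A minimal sketch of the idea in the issue title, using made-up names rather than the controller's actual code: start nginx and make the controller exit whenever the master process dies, so the pod's restartPolicy brings everything back up.

```go
// Hypothetical sketch only, not the controller's real code: start the nginx
// master and terminate the controller if the master process ever exits, so
// the container is restarted by the kubelet (restartPolicy: Always in the
// pod spec later in this thread).
package main

import (
	"log"
	"os/exec"
)

func startNginx() *exec.Cmd {
	cmd := exec.Command("nginx", "-g", "daemon off;")
	if err := cmd.Start(); err != nil {
		log.Fatalf("nginx failed to start: %v", err)
	}
	go func() {
		// Wait blocks until the master process exits; any exit here is
		// treated as fatal for the controller, which forces a restart.
		err := cmd.Wait()
		log.Fatalf("nginx master process exited: %v", err)
	}()
	return cmd
}

func main() {
	startNginx()
	select {} // stand-in for the controller's real event loop
}
```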

@aledbf aledbf self-assigned this Nov 25, 2016
@SleepyBrett commented

Here is the pod yaml you asked for:

apiVersion: v1
kind: Pod
metadata:
  annotations:
  generateName: ingress-controller-
  labels:
    k8s-app: ingress-controller
spec:
  containers:
  - args:
    - /nginx-ingress-controller
    - --default-backend-service=kube-system/default-http-backend
    - --nginx-configmap=kube-system/ingress-controller-conf
    - --tcp-services-configmap=kube-system/ingress-controller-tcp
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.namespace
    image: gcr.io/google_containers/nginx-ingress-controller:0.8.3
    imagePullPolicy: Always
    livenessProbe:
      failureThreshold: 3
      httpGet:
        path: /healthz
        port: 18080
        scheme: HTTP
      initialDelaySeconds: 10
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 1
    name: nginx
    ports:
    - containerPort: 80
      hostPort: 8000
      protocol: TCP
    - containerPort: 443
      hostPort: 8443
      protocol: TCP
    - containerPort: 18080
      protocol: TCP
    readinessProbe:
      failureThreshold: 3
      httpGet:
        path: /healthz
        port: 18080
        scheme: HTTP
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 1
    resources:
      limits:
        cpu: 200m
        memory: 200Mi
      requests:
        cpu: 200m
        memory: 200Mi
    terminationMessagePath: /dev/termination-log
    volumeMounts:
    - mountPath: /etc/nginx/template
      name: ingress-controller-template-volume
      readOnly: true
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-neshp
      readOnly: true
  dnsPolicy: ClusterFirst
  nodeName: ip-172-24-224-171.us-west-2.compute.internal
  nodeSelector:
    beta.nordstrom.net/cluster-role: worker
  restartPolicy: Always
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  volumes:
  - configMap:
      defaultMode: 420
      items:
      - key: nginx.tmpl
        path: nginx.tmpl
      name: ingress-controller-nginx-template
    name: ingress-controller-template-volume
  - name: default-token-neshp
    secret:
      defaultMode: 420
      secretName: default-token-neshp

@bprashanth (Contributor) commented
Can you expose that as a liveness probe instead, and get the kubelet to do it automatically? I mean, there's no way you're going to detect all errors that might arise at runtime without spinning off a goroutine, and if you do, you're basically re-implementing something we already expose through the API.
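
A rough sketch of that suggestion, with made-up names rather than the controller's real API: have the /healthz handler (the pod spec above already probes /healthz on port 18080) fail whenever the nginx master process is gone, so the kubelet's liveness probe restarts the container instead of a dedicated monitoring goroutine.

```go
// Illustrative sketch of the liveness-probe approach (names are hypothetical,
// not the controller's real code): /healthz returns an error when the recorded
// nginx master PID no longer refers to a running process, and the kubelet
// restarts the container via the livenessProbe defined in the pod spec.
package main

import (
	"net/http"
	"os"
	"syscall"
)

// masterPID would be captured when the controller launches nginx.
var masterPID = 1

func healthz(w http.ResponseWriter, r *http.Request) {
	proc, err := os.FindProcess(masterPID)
	if err == nil {
		// Signal 0 checks that the process exists without sending a signal.
		err = proc.Signal(syscall.Signal(0))
	}
	if err != nil {
		http.Error(w, "nginx master process is not running", http.StatusServiceUnavailable)
		return
	}
	w.Write([]byte("ok"))
}

func main() {
	http.HandleFunc("/healthz", healthz)
	http.ListenAndServe(":18080", nil)
}
```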

@aledbf (Member, Author) commented Nov 30, 2016

Fixed in #31.
