
[HELP] 1 node(s) didn't have free ports for the requested pod ports #104

Closed
harshavardhanc opened this issue Sep 9, 2019 · 32 comments

@harshavardhanc

harshavardhanc commented Sep 9, 2019

I'm trying to install Istio in a k3d cluster, but one of the components (the service load balancer) is failing to start with the error below.

Warning FailedScheduling 42s (x6 over 2m59s) default-scheduler 0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports.

NAME                                      READY   STATUS      RESTARTS   AGE
grafana-6fb9f8c5c7-mr7vb                  1/1     Running     0          6m24s
istio-citadel-5cf47dbf7c-jxc4w            1/1     Running     0          6m24s
istio-galley-7898b587db-8jrpq             1/1     Running     0          6m25s
istio-ingressgateway-7c6f8fd795-wl6fn     1/1     Running     0          6m24s
istio-init-crd-10-8qh2j                   0/1     Completed   0          26m
istio-init-crd-11-j7glh                   0/1     Completed   0          26m
istio-init-crd-12-gvsg6                   0/1     Completed   0          26m
istio-nodeagent-clvkf                     1/1     Running     0          6m25s
istio-pilot-5c4b6f576b-2b5zf              2/2     Running     0          6m24s
istio-policy-769664fcf7-hj6bn             2/2     Running     3          6m24s
istio-sidecar-injector-677bd5ccc5-wj9zb   1/1     Running     0          6m24s
istio-telemetry-577c6f5b8c-j9dxn          2/2     Running     3          6m24s
istio-tracing-5d8f57c8ff-t7mm4            1/1     Running     0          6m24s
kiali-7d749f9dcb-w7qxr                    1/1     Running     0          6m24s
prometheus-776fdf7479-gznbs               1/1     Running     0          6m24s
svclb-istio-ingressgateway-4znth          0/9     Pending     0          6m25s

Please help me fix this issue.

@iwilltry42
Member

Hey there, thanks for filing this issue.
Can you paste the full output of kubectl describe for the failing pod/deployment?

@harshavardhanc
Author

Hey @iwilltry42
Here is the output of the failing pod.

~ k describe pods svclb-istio-ingressgateway-92p9s -n istio-system
Name:           svclb-istio-ingressgateway-92p9s
Namespace:      istio-system
Priority:       0
Node:           <none>
Labels:         app=svclb-istio-ingressgateway
                controller-revision-hash=597bd7b896
                pod-template-generation=1
                svccontroller.k3s.cattle.io/svcname=istio-ingressgateway
Annotations:    <none>
Status:         Pending
IP:
Controlled By:  DaemonSet/svclb-istio-ingressgateway
Containers:
  lb-port-15020:
    Image:      rancher/klipper-lb:v0.1.1
    Port:       15020/TCP
    Host Port:  15020/TCP
    Environment:
      SRC_PORT:    15020
      DEST_PROTO:  TCP
      DEST_PORT:   15020
      DEST_IP:     10.43.159.196
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-f5w67 (ro)
  lb-port-80:
    Image:      rancher/klipper-lb:v0.1.1
    Port:       80/TCP
    Host Port:  80/TCP
    Environment:
      SRC_PORT:    80
      DEST_PROTO:  TCP
      DEST_PORT:   80
      DEST_IP:     10.43.159.196
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-f5w67 (ro)
  lb-port-443:
    Image:      rancher/klipper-lb:v0.1.1
    Port:       443/TCP
    Host Port:  443/TCP
    Environment:
      SRC_PORT:    443
      DEST_PROTO:  TCP
      DEST_PORT:   443
      DEST_IP:     10.43.159.196
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-f5w67 (ro)
  lb-port-31400:
    Image:      rancher/klipper-lb:v0.1.1
    Port:       31400/TCP
    Host Port:  31400/TCP
    Environment:
      SRC_PORT:    31400
      DEST_PROTO:  TCP
      DEST_PORT:   31400
      DEST_IP:     10.43.159.196
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-f5w67 (ro)
  lb-port-15029:
    Image:      rancher/klipper-lb:v0.1.1
    Port:       15029/TCP
    Host Port:  15029/TCP
    Environment:
      SRC_PORT:    15029
      DEST_PROTO:  TCP
      DEST_PORT:   15029
      DEST_IP:     10.43.159.196
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-f5w67 (ro)
  lb-port-15030:
    Image:      rancher/klipper-lb:v0.1.1
    Port:       15030/TCP
    Host Port:  15030/TCP
    Environment:
      SRC_PORT:    15030
      DEST_PROTO:  TCP
      DEST_PORT:   15030
      DEST_IP:     10.43.159.196
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-f5w67 (ro)
  lb-port-15031:
    Image:      rancher/klipper-lb:v0.1.1
    Port:       15031/TCP
    Host Port:  15031/TCP
    Environment:
      SRC_PORT:    15031
      DEST_PROTO:  TCP
      DEST_PORT:   15031
      DEST_IP:     10.43.159.196
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-f5w67 (ro)
  lb-port-15032:
    Image:      rancher/klipper-lb:v0.1.1
    Port:       15032/TCP
    Host Port:  15032/TCP
    Environment:
      SRC_PORT:    15032
      DEST_PROTO:  TCP
      DEST_PORT:   15032
      DEST_IP:     10.43.159.196
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-f5w67 (ro)
  lb-port-15443:
    Image:      rancher/klipper-lb:v0.1.1
    Port:       15443/TCP
    Host Port:  15443/TCP
    Environment:
      SRC_PORT:    15443
      DEST_PROTO:  TCP
      DEST_PORT:   15443
      DEST_IP:     10.43.159.196
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-f5w67 (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  default-token-f5w67:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-f5w67
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/disk-pressure:NoSchedule
                 node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute
                 node.kubernetes.io/pid-pressure:NoSchedule
                 node.kubernetes.io/unreachable:NoExecute
                 node.kubernetes.io/unschedulable:NoSchedule
Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  82s (x7 over 5m18s)  default-scheduler  0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports.

@iwilltry42
Member

Alright, so it seems like one of the Host Ports is already blocked by another pod.
Anyway, this is not a component of istio but an automatic deployment coming from k3s.

@iwilltry42
Member

Can you provide more details on
a) how you created the cluster (k3d command)
b) how you deployed istio
please? Just so I can replicate it 👍

@iwilltry42
Member

Any news on this @harshavardhanc ?

@tony-kerz

Experiencing the same issue:
macOS: 10.4.6
Docker Desktop: 2.1.0.3

wget -q -O - https://raw.githubusercontent.com/rancher/k3d/master/install.sh | bash
export KUBECONFIG=$(k3d get-kubeconfig)
curl -L https://git.io/getLatestIstio | ISTIO_VERSION=1.3.0 sh -
cd istio-1.3.0/
export PATH=$PWD/bin:$PATH
for i in install/kubernetes/helm/istio-init/files/crd*yaml; do kubectl apply -f $i; done
kubectl apply -f install/kubernetes/istio-demo-auth.yaml
bash-4.4$ kubectl get po -n istio-system
NAME                                      READY   STATUS      RESTARTS   AGE
grafana-6fc987bd95-pvg9j                  1/1     Running     1          6h57m
istio-citadel-679b7c9b5b-rmqt6            1/1     Running     1          6h57m
istio-cleanup-secrets-1.3.0-wwnfr         0/1     Completed   0          6h57m
istio-egressgateway-5db67796d5-msz5n      1/1     Running     1          6h57m
istio-galley-7ff97f98b5-n5zng             1/1     Running     1          6h57m
istio-grafana-post-install-1.3.0-mfbnm    0/1     Completed   0          6h57m
istio-ingressgateway-859bb7b4-24l9p       1/1     Running     1          6h57m
istio-pilot-9b9f7f5c8-99mj9               2/2     Running     2          6h57m
istio-policy-754cbf67fb-6x9dl             2/2     Running     7          6h57m
istio-security-post-install-1.3.0-7bh9n   0/1     Completed   0          6h57m
istio-sidecar-injector-68f4668959-274mv   1/1     Running     1          6h57m
istio-telemetry-7cf8dcfd54-tnnbq          2/2     Running     8          6h57m
istio-tracing-669fd4b9f8-gsqm5            1/1     Running     1          6h57m
kiali-94f8cbd99-gfgzl                     1/1     Running     1          6h57m
prometheus-776fdf7479-kv95j               1/1     Running     1          6h57m
svclb-istio-ingressgateway-bkpw8          0/9     Pending     0          6h57m
bash-4.4$ kubectl describe pod svclb-istio-ingressgateway-bkpw8 -n istio-system
Name:               svclb-istio-ingressgateway-bkpw8
Namespace:          istio-system
Priority:           0
PriorityClassName:  <none>
Node:               <none>
Labels:             app=svclb-istio-ingressgateway
                    controller-revision-hash=688bbd58b
                    pod-template-generation=1
                    svccontroller.k3s.cattle.io/svcname=istio-ingressgateway
Annotations:        <none>
Status:             Pending
IP:
Controlled By:      DaemonSet/svclb-istio-ingressgateway
Containers:
  lb-port-15020:
    Image:      rancher/klipper-lb:v0.1.1
    Port:       15020/TCP
    Host Port:  15020/TCP
    Environment:
      SRC_PORT:    15020
      DEST_PROTO:  TCP
      DEST_PORT:   15020
      DEST_IP:     10.43.69.4
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-z58mp (ro)
  lb-port-80:
    Image:      rancher/klipper-lb:v0.1.1
    Port:       80/TCP
    Host Port:  80/TCP
    Environment:
      SRC_PORT:    80
      DEST_PROTO:  TCP
      DEST_PORT:   80
      DEST_IP:     10.43.69.4
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-z58mp (ro)
  lb-port-443:
    Image:      rancher/klipper-lb:v0.1.1
    Port:       443/TCP
    Host Port:  443/TCP
    Environment:
      SRC_PORT:    443
      DEST_PROTO:  TCP
      DEST_PORT:   443
      DEST_IP:     10.43.69.4
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-z58mp (ro)
  lb-port-31400:
    Image:      rancher/klipper-lb:v0.1.1
    Port:       31400/TCP
    Host Port:  31400/TCP
    Environment:
      SRC_PORT:    31400
      DEST_PROTO:  TCP
      DEST_PORT:   31400
      DEST_IP:     10.43.69.4
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-z58mp (ro)
  lb-port-15029:
    Image:      rancher/klipper-lb:v0.1.1
    Port:       15029/TCP
    Host Port:  15029/TCP
    Environment:
      SRC_PORT:    15029
      DEST_PROTO:  TCP
      DEST_PORT:   15029
      DEST_IP:     10.43.69.4
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-z58mp (ro)
  lb-port-15030:
    Image:      rancher/klipper-lb:v0.1.1
    Port:       15030/TCP
    Host Port:  15030/TCP
    Environment:
      SRC_PORT:    15030
      DEST_PROTO:  TCP
      DEST_PORT:   15030
      DEST_IP:     10.43.69.4
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-z58mp (ro)
  lb-port-15031:
    Image:      rancher/klipper-lb:v0.1.1
    Port:       15031/TCP
    Host Port:  15031/TCP
    Environment:
      SRC_PORT:    15031
      DEST_PROTO:  TCP
      DEST_PORT:   15031
      DEST_IP:     10.43.69.4
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-z58mp (ro)
  lb-port-15032:
    Image:      rancher/klipper-lb:v0.1.1
    Port:       15032/TCP
    Host Port:  15032/TCP
    Environment:
      SRC_PORT:    15032
      DEST_PROTO:  TCP
      DEST_PORT:   15032
      DEST_IP:     10.43.69.4
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-z58mp (ro)
  lb-port-15443:
    Image:      rancher/klipper-lb:v0.1.1
    Port:       15443/TCP
    Host Port:  15443/TCP
    Environment:
      SRC_PORT:    15443
      DEST_PROTO:  TCP
      DEST_PORT:   15443
      DEST_IP:     10.43.69.4
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-z58mp (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  default-token-z58mp:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-z58mp
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/disk-pressure:NoSchedule
                 node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute
                 node.kubernetes.io/pid-pressure:NoSchedule
                 node.kubernetes.io/unreachable:NoExecute
                 node.kubernetes.io/unschedulable:NoSchedule
Events:
  Type     Reason            Age                     From               Message
  ----     ------            ----                    ----               -------
  Warning  FailedScheduling  5m12s (x96 over 6h57m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports.
  Warning  FailedScheduling  26s (x6 over 4m54s)     default-scheduler  0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports.

@iwilltry42
Member

I'm pretty sure that ports 80 and 443 are already taken by traefik.
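One quick way to confirm this (a sketch, not from the thread; it assumes the svclb-traefik pod runs in kube-system and carries the label app=svclb-traefik, analogous to the svclb-istio-ingressgateway pod shown above):

```
# Show the host ports already claimed by the k3s service load balancer pod for traefik
kubectl describe pod -n kube-system -l app=svclb-traefik | grep 'Host Port'
```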

@harshavardhanc
Author

harshavardhanc commented Oct 15, 2019

Sorry for the late reply @iwilltry42, I was OOO. I was using k3d create --name cluster_name.
You are right @iwilltry42, so I then created the cluster without traefik using k3d create --server-arg --no-deploy --server-arg traefik --name cluster_name.

@iwilltry42
Member

So it works without traefik? Can I go ahead and close this issue then @harshavardhanc ? 👍

@rjshrjndrn

@iwilltry42 Any idea why this happens? I think it'd be good to have this in the docs, in case somebody else gets blocked by it.

@harshavardhanc
Author

Yes, it works without traefik @iwilltry42

@harshavardhanc
Author

+1 @rjshrjndrn

@iwilltry42
Member

iwilltry42 commented Oct 19, 2019

@rjshrjndrn yep, it's because of the Service Load Balancer, which reacts to services of type: LoadBalancer.
See the related k3s documentation: https://rancher.com/docs/k3s/latest/en/configuration/#service-load-balancer
If you don't need or want this feature, start the cluster with --server-arg '--no-deploy servicelb'.
Note: the pod that stays in Pending state is part of the k3s infrastructure, not part of the istio manifests/chart which you deployed.

Addition: Do both of you have a k3d cluster created with only a single node? (The controller tries to find a node where the ports are free, and obviously there is none in a single-node cluster where traefik is already running and has the ports occupied.)
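For reference, the two workarounds mentioned in this thread look like this on k3d v1 (a sketch; the cluster name is a placeholder):

```
# Option A: skip the built-in service load balancer, so no svclb-* pods are created at all
k3d create --name my-cluster --server-arg '--no-deploy=servicelb'

# Option B: skip traefik, freeing host ports 80/443 for the istio-ingressgateway service
k3d create --name my-cluster --server-arg '--no-deploy=traefik'
```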

@iwilltry42 iwilltry42 self-assigned this Oct 19, 2019
@iwilltry42 iwilltry42 added the question Further information is requested label Oct 19, 2019
@iwilltry42 iwilltry42 changed the title 1 node(s) didn't have free ports for the requested pod ports [HELP] 1 node(s) didn't have free ports for the requested pod ports Oct 19, 2019
@rjshrjndrn

rjshrjndrn commented Oct 19, 2019

@iwilltry42 I always run k3d with one node and without traefik.
For type: LoadBalancer I always get an IP. Usually I install Istio for ingress and tinker with it.

Addition: Do both of you have a k3d cluster created with only a single node? (The controller tries to find a node where the ports are free, and obviously there is none in a single-node cluster where traefik is already running and has the ports occupied.)

Why can't traefik run on the same node? I don't think there's any toleration for traefik to run on the master itself.

Note: I tried the cluster with traefik and one node, and for me it works perfectly fine.

@iwilltry42
Member

@rjshrjndrn , I'm not sure I understand you correctly there.
But the issue shown in the outputs posted here is that there's a pod svclb-istio-ingressgateway-abcd stuck in Pending state.
This pod is not spawned by the Istio manifests, it's spawned by a controller that is part of k3s. This controller is similar to MetalLB: for every kind: Service of type: LoadBalancer that you create in the cluster, it tries to find a node where it can map the requested port from the node to the pod.
Now, if you create a cluster without the --no-deploy=traefik flag, you'll already have a pod svclb-traefik-abcd with two containers, which use hostPort: 80 and hostPort: 443 (meaning that ports 80 and 443 on the node are in use).
Unfortunately, the svclb-istio-ingressgateway pod now needs exactly the same ports, but since those are already taken, it's stuck in Pending state.
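To make the clash visible, you can list every hostPort claimed by pods in the cluster (an illustrative kubectl query, not taken from the thread):

```
# Prints "namespace/pod: hostPorts" for every pod; svclb-traefik shows 80 443,
# and svclb-istio-ingressgateway requests the same ports again.
kubectl get pods -A -o jsonpath='{range .items[*]}{.metadata.namespace}{"/"}{.metadata.name}{": "}{.spec.containers[*].ports[*].hostPort}{"\n"}{end}'
```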

@rjshrjndrn

Okay, thank you @iwilltry42 for the clarification. Makes sense now.
Basically it's a clash between the two ingresses, right?

@iwilltry42
Member

Are there any questions left here or can I close this issue? 👍

@harshavardhanc
Author

You can close this issue @iwilltry42

@Umair841

Umair841 commented Apr 2, 2021

Anybody here? I have an issue and want to discuss it.

@rjshrjndrn

Hi @Umair841, is it something related to port mapping?

@Umair841

Umair841 commented Apr 2, 2021

Yes sir. Actually, I am mapping a host port to the container port in my pods, so when a new pod spins up it does not reach a ready state because that port is already taken by another pod. Is there any way for my new pod to get the host port dynamically while starting up? Can we add a list of ports as the port value in the Service?

@Umair841

Umair841 commented Apr 2, 2021

This is the error, actually: "1 node(s) didn't have free ports for the requested pod ports, 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate."

@rjshrjndrn

Can you give me the command you used to create the cluster?
Note: for a code block, use backticks three times:

```
This is my code.
```

@Umair841

Umair841 commented Apr 2, 2021

Sir, actually I am new to k8s and this is my first attempt at troubleshooting the error and resolving it.
I am creating a cluster with kops on AWS.
The command is
kops create cluster --name umair.k8s.local --zones eu-central-1a --master-size t2.micro --master-count 1 --node-count 1 --node-size t2.micro --kubernetes-version 1.15.0
and here is the deployment YAML file.


apiVersion: v1
kind: Service
metadata:
  name: compliancex-rabbitmq
spec:
  selector:
    app: compliancex-rabbitmq
  ports:
    - name: tcp
      port: 80
      port: 90
      targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: compliancex-rabbitmq
spec:
  selector:
    matchLabels:
      app: compliancex-rabbitmq
  replicas: 1
  template:
    metadata:
      labels:
        app: compliancex-rabbitmq
    spec:
      hostNetwork: true
      containers:
        - name: compliancex-rabbitmq
          image: nginx:stable-alpine
          ports:
            - containerPort: 80


Now I copied this file into the same directory with another name to create a host-port conflict. Is there any way for both pods to become ready by assigning a different host port to each of them?

thanks in advance

@Umair841

Umair841 commented Apr 2, 2021

this is the deployment YAML actually.

apiVersion: v1
kind: Service
metadata:
  name: compliancex-rabbitmq
spec:
  selector:
    app: compliancex-rabbitmq
  ports:
    - name: tcp
      port: 80
      targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: compliancex-rabbitmq
spec:
  selector:
    matchLabels:
      app: compliancex-rabbitmq
  replicas: 1
  template:
    metadata:
      labels:
        app: compliancex-rabbitmq
    spec:
      hostNetwork: true
      containers:
        - name: compliancex-rabbitmq
          image: nginx:stable-alpine
          ports:
            - containerPort: 80

@Umair841

Umair841 commented Apr 2, 2021

Is there any way to automatically assign a random host port to the pod while it is starting up, or can we give a list of ports as the value of port in the Service?

@rjshrjndrn

I won't be able to help you unless you provide proper YAML. And I asked for the command which you used to create the k3d cluster.

Is there any way to automatically assign a random host port to the pod while it is starting up, or can we give a list of ports as the value of port in the Service?

Nope. If you're using Linux, you should be able to use the load balancer IP from the host machine, as the Docker network is shared.
That means: when you do kubectl get service -n <namespace>, take the IP address you get and try that ip:port combo.
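For example (an illustrative sketch; the namespace and service name are taken from the Istio example above, and <EXTERNAL-IP> is a placeholder):

```
# Look up the IP assigned by the service load balancer...
kubectl get service -n istio-system istio-ingressgateway
# ...then reach it directly from the host using that IP and the service port
curl http://<EXTERNAL-IP>:80/
```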

@Umair841

Umair841 commented Apr 2, 2021

What are your thoughts, sir, on "dynamic port binding"?

@rjshrjndrn

I don't think that's possible.

@iwilltry42
Member

Hi @Umair841 , I moved your question to the discussions feature of this repo, please continue over there to add more details: #551
@rjshrjndrn , thanks for helping out here :)
If I understood it correctly, you're trying to use hostNetwork: true with a dynamically assigned containerPort, which won't work anyway: due to the hostNetwork: true stanza, all of the ports that your application exposes are opened on the host. So you would have to reconfigure that app to listen on a different port.
In your case it's most probably not working because it's listening on port 80 (or at least that's what's specified in the containerPort), which is the most common port and most probably already used by the Ingress controller.
Please also note that we cannot give support for things you try with kops, as this is the k3d repo 😉
Answers in the linked discussion please 👍
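Purely for illustration (not part of the original answer; the deployment name is taken from the YAML above and the new Service name is hypothetical), one way to avoid claiming host ports is to drop hostNetwork: true and expose the pod through a regular Service instead:

```
# Remove hostNetwork from the pod template so the pod no longer opens its ports on the node
kubectl patch deployment compliancex-rabbitmq --type=json \
  -p='[{"op": "remove", "path": "/spec/template/spec/hostNetwork"}]'

# Expose it via a ClusterIP Service; no host port is claimed, so replicas can schedule freely
kubectl expose deployment compliancex-rabbitmq --port=80 --target-port=80 --name=compliancex-rabbitmq-svc
```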

@msolimans

With the newer version of k3d the argument name changed; I was able to make Istio's ingressgateway work by passing --k3s-server-arg '--no-deploy=traefik'.

@drummerpva

On k3d version 5 the flag is --k3s-arg '--no-deploy=traefik@server:*'.
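A full v5 invocation might then look like this (a sketch; the cluster name is a placeholder, and k3d cluster create is assumed to be the v5 subcommand):

```
# k3d v5: create a cluster without traefik so the Istio ingress gateway can claim ports 80/443
k3d cluster create my-cluster --k3s-arg '--no-deploy=traefik@server:*'
```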
