
When using flannel, the CoreDNS remains "pending" #2064

Open
lengcangche-gituhub opened this issue Sep 23, 2024 · 3 comments

Comments


lengcangche-gituhub commented Sep 23, 2024

When I use flannel as the pod network for Kubernetes, the coredns pods stay in the "Pending" status.

Expected Behavior

The coredns pod should be in the "running" status.

Current Behavior

NAME                                   READY   STATUS    RESTARTS   AGE
coredns-66f779496c-8ddns               0/1     Pending   0          4m19s
coredns-66f779496c-b54jf               0/1     Pending   0          4m19s
etcd-davinci-mini                      1/1     Running   6          4m36s
kube-apiserver-davinci-mini            1/1     Running   6          4m32s
kube-controller-manager-davinci-mini   1/1     Running   7          4m32s
kube-proxy-g7l22                       1/1     Running   0          4m19s
kube-scheduler-davinci-mini            1/1     Running   7          4m32s

Steps to Reproduce (for bugs)

1. kubeadm init --pod-network-cidr=100.100.0.0/16 --image-repository=registry.aliyuncs.com/google_containers --apiserver-advertise-address=192.168.1.122 (192.168.1.122 is my private IP address, which is reachable from the cluster nodes)
2. mkdir -p $HOME/.kube
3. sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
4. sudo chown $(id -u):$(id -g) $HOME/.kube/config (steps 2-4 are as directed by the output of the kubeadm init command)
5. kubectl apply -f kube-flannel.yml (the file is attached below)
6. kubectl get pod -n kube-system
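For reference, a common way to see why a coredns pod is stuck in Pending is to describe it and read the scheduler events (a sketch; kubeadm's coredns pods carry the label k8s-app=kube-dns, and the exact event text will vary by cluster):

```shell
# Show the scheduler events for the Pending coredns pods; the Events
# section at the bottom usually states why they cannot be scheduled
# (e.g. the node is NotReady because no CNI plugin is running yet).
kubectl -n kube-system describe pod -l k8s-app=kube-dns

# Check whether the node itself is Ready: until a pod network add-on
# is installed and working, kubeadm nodes stay NotReady and coredns
# stays Pending by design.
kubectl get nodes -o wide
```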

Your Environment

  • Backend used: vxlan
  • Kubernetes version: 1.28
  • Operating System and version: Linux davinci-mini 5.10.0+ (on Huawei's NPU Atlas)

Attached: the content of "kube-flannel.yml"

---
kind: Namespace
apiVersion: v1
metadata:
  name: kube-flannel
  labels:
    k8s-app: flannel
    pod-security.kubernetes.io/enforce: privileged
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: flannel
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: flannel
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: flannel
  name: flannel
  namespace: kube-flannel
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel
  labels:
    tier: node
    k8s-app: flannel
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "100.100.0.0/16",
      "EnableNFTables": false,
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
    k8s-app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        image: rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        image: rancher/mirrored-flannelcni-flannel:v0.18.1
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: rancher/mirrored-flannelcni-flannel:v0.18.1
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
@rbrtbnfgl
Contributor

If you do a kubectl describe on the coredns pod, what do you get?

@lengcangche-gituhub
Author

> If you do a kubectl describe on the coredns pod, what do you get?

I'm sorry. There is no flannel pod.

@rbrtbnfgl
Contributor

Flannel creates its own namespace; you should check the kube-flannel namespace.
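As a sketch, checking the flannel pods in their own namespace (the attached manifest creates everything in the kube-flannel namespace, and its DaemonSet pods carry the label app=flannel, so they will not appear under kube-system):

```shell
# The manifest above creates the kube-flannel namespace, so the
# DaemonSet pods live there rather than in kube-system:
kubectl get pods -n kube-flannel

# If the flannel pods exist but are crashing, their logs usually
# show why (e.g. a backend or subnet configuration problem):
kubectl logs -n kube-flannel -l app=flannel
```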
