Question about iptables update rules #1965
Comments
Hi. How are you creating the pods? Are you exposing any services from the pods? I saw that you are using old flannel and Kubernetes versions. I don't recall whether the MASQUERADE handling was different in that old version, but I am sure that recent versions of Flannel use a single rule based on the pod CIDR of that node.
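To illustrate the rule shape described above, here is a minimal sketch over sample `iptables-save` text (the CIDRs and file path are illustrative, not taken from this cluster; on a real node you would inspect `iptables-save -t nat` directly):

```shell
# Sample nat-table dump in the shape recent Flannel versions write:
# one CIDR-wide MASQUERADE rule for the node's pod network, not per-pod rules.
cat <<'EOF' > /tmp/nat-sample.txt
-A POSTROUTING -s 172.25.128.0/17 -d 172.25.128.0/17 -j RETURN
-A POSTROUTING -s 172.25.128.0/17 ! -d 224.0.0.0/4 -j MASQUERADE
EOF
# Count the MASQUERADE rules in the sample dump.
grep -c MASQUERADE /tmp/nat-sample.txt
```

With a single CIDR-wide rule, individual pod churn does not require any iptables updates, which is why per-pod leftover rules would be unexpected.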
Thanks for the reply. I created a service from the nacos pod, not from the service pods in the picture, and I am sure the service IP range differs from the cluster IP range. I am not a network engineer, so I don't really understand how iptables and the MASQUERADE process work. Our team has used an older flannel version for the past 3 years (I am not sure, maybe 0.7.x?), and this problem never happened.
If a pod goes down, is flannel expected to reclaim the pod's cluster IP and delete the corresponding rule in iptables?
Flannel shouldn't create any MASQUERADE rule for the pod. |
Well, which process creates the rules?
I know how Flannel creates the rules on the latest versions.
Thank you, we will try running a newer Flannel version in our testing environment and see whether the same behavior appears.
Expected Behavior
I use Flannel to manage cluster IP allocation in the k8s cluster. During use, I found that when a pod dies, its cluster IP no longer exists in the cluster, but the IP can still be pinged and reached via telnet. After checking the iptables rules on the node machine, I found that the k8s node still retains the POSTROUTING chain entry for this IP and does not seem to delete it properly, which leads my microservices to mistakenly register the service (phenomenon: the old cluster IP still remains in the nacos registry center).
Current Behavior
Possible Solution
Steps to Reproduce (for bugs)
Context
On k8s master: kubectl get pods --all-namespaces -o wide | grep 172.25.163
ip: 172.25.163.7 does not exist, but it can still be pinged and reached via telnet
On k8s node (flannel ip: 172.25.163.0): iptables -t nat -L -n -v
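One way to check for a leftover rule pinned to the dead pod's IP is to grep a nat-table dump for it. This is a sketch over sample text (the per-pod `/32` entry and file path are illustrative assumptions; on the real node you would grep the output of `iptables-save -t nat` instead):

```shell
# Sample dump containing a hypothetical stale per-pod entry
# alongside the CIDR-wide MASQUERADE rule.
cat <<'EOF' > /tmp/nat-dump.txt
-A POSTROUTING -s 172.25.163.7/32 -j MASQUERADE
-A POSTROUTING -s 172.25.128.0/17 ! -d 224.0.0.0/4 -j MASQUERADE
EOF
# A hit for the pod IP after the pod is gone would indicate a leftover rule.
grep '172.25.163.7' /tmp/nat-dump.txt
```

If such a rule shows up on the node after the pod has been deleted, that narrows the problem to whichever component wrote it rather than to flannel's CIDR-wide rule.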
My etcd config is:
/coreos.com/network/config
{"Network":"172.25.128.0/17","Backend":{"Type":"vxlan","Directrouting":false}}
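For reference, a config under that key is typically written with etcdctl before flanneld starts. This is a sketch, not a command taken from this cluster: older flannel versions read the etcd v2 API (`etcdctl set`), while newer setups use v3 (`etcdctl put`), and the documented backend key is spelled `DirectRouting`.

```shell
# etcd v2 API (matches the older flannel versions discussed in this issue):
etcdctl set /coreos.com/network/config \
  '{"Network":"172.25.128.0/17","Backend":{"Type":"vxlan","DirectRouting":false}}'
```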
I don't know whether this behavior is normal, but this non-existent IP appears in the nacos registry.
Your Environment