Iptables not updating correctly for HostPort when using CNI chaining Portmap in Cilium #683
Comments
+1 on this issue, experiencing the exact same thing on Google Kubernetes Engine + Dataplane V2 (Cilium). When nodes with a DaemonSet using a hostPort are restarted, the iptables entries are duplicated and communication over the hostPort no longer works. cc @parabolic

Still present with k8s 1.23 from RKE (using Canal) (apparently there were not enough +1 comments posted so the bot closed this...)

We may still be running into this issue. More details here. Is this still an open issue, or have we potentially misconfigured something (or maybe we are running an older version of something in the critical path)? Any guidance would be much appreciated!

Has this issue been resolved? Can anyone provide a solution? I'd be grateful.
As suggested by Cilium maintainers, opening the issue here for Portmap. Original issue: cilium/cilium#18227
Using Cilium as an AWS VPC CNI replacement, along with portmap CNI chaining for hostPort pods:
Situation:
When restarting a pod with a hostPort (in this case listening on port 8126), we noticed that sometimes the old iptables rule is not deleted, leaving the service unavailable. We ended up with duplicated iptables rules pointing to the same port on both the old and the new pod's IP.
Here, our example is a DaemonSet that we restarted. The previous pod was using IP `10.210.148.146` and the new pod has IP `10.210.135.109`. After the restart, the old IP was picked up by an unrelated pod, which doesn't have a hostPort on `8126`. This issue occurred for every pod in the restarted DaemonSet.

Here is another example of `iptables -L` on another host.

What did we expect?
Upon pod restart, the old iptables rule should be deleted and a new one should be written with the new pod IP.
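For anyone trying to confirm they are hitting the same problem, one way to check a node for stale entries is to list the portmap plugin's DNAT chain and count how many rules target the affected hostPort. This is a sketch: `CNI-HOSTPORT-DNAT` is portmap's default chain name in the `nat` table, and port `8126` is the hostPort from this report.

```shell
# List the portmap plugin's DNAT chain in the nat table, with rule numbers.
# Duplicated hostPort entries show up as multiple rules for the same dpt.
iptables -t nat -L CNI-HOSTPORT-DNAT -n --line-numbers

# Count rules referencing the hostPort in the saved nat table; more than
# one match per protocol suggests a stale rule from a previous pod.
iptables-save -t nat | grep -c 'dport 8126'
```

If the second command prints a number greater than expected for the pods actually running on the node, the chain likely still contains a rule pointing at the old pod IP.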
Quick fix
Flushing the chain `CNI-HOSTPORT-DNAT` on the host and restarting the pod with the hostPort fixed the issue.

Kernel Version
5.4.156-83.273.amzn2.x86_64
Kubernetes Version
v1.20.11 in AWS EKS
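The quick fix described above can be sketched as the commands below. Note the caveat: flushing `CNI-HOSTPORT-DNAT` removes the hostPort DNAT rules for every hostPort pod on that node, so all of them need to be restarted afterwards so portmap re-installs their rules. The DaemonSet and namespace names here are placeholders, not from the original report.

```shell
# Run on the affected node (requires root).
# WARNING: this flushes hostPort DNAT rules for ALL pods on the node,
# not just the broken one.
iptables -t nat -F CNI-HOSTPORT-DNAT

# Restart the affected pods so portmap writes fresh DNAT rules pointing
# at the new pod IPs. Names below are hypothetical placeholders.
kubectl rollout restart daemonset/my-hostport-ds -n my-namespace
```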