IPTables rules missing from Flannel/CNI on Kubernetes installation #799
@limited Thanks! It works!
I see this issue too. How do we make sure those iptables rules run on reboot?
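For the reboot question: one common way to persist iptables rules is to save the ruleset and have it restored at boot. A sketch for Debian/Ubuntu (the package name and file path are standard for those distributions, not something stated in this thread):

```shell
# Install the persistence helper (Debian/Ubuntu; other distros differ)
sudo apt-get install -y iptables-persistent
# After adding the desired rules, save the current ruleset;
# netfilter-persistent restores /etc/iptables/rules.v4 at boot
sudo iptables-save | sudo tee /etc/iptables/rules.v4
```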
I have the same issue, but I think it's not a bug of flannel.
It seems Docker >= 1.13 adds an iptables rule like the one below, and that is what makes this issue happen:
All you need to do is add the rule below:
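The rule itself did not survive in this copy of the thread. The change in question is that Docker >= 1.13 sets the default policy of the FORWARD chain to DROP; the commonly cited one-line workaround is to reset the policy:

```shell
# Docker >= 1.13 sets the FORWARD chain's default policy to DROP;
# this resets it to ACCEPT (a broad workaround, not a targeted fix)
sudo iptables -P FORWARD ACCEPT
```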
I'm using Docker 1.12, so I think the behavior must start in an earlier version. Also, I don't think it's an acceptable solution to change the default behavior of the IPTables rules. My two rules are a more precise fix.
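The two rules were not captured in this copy of the thread. A sketch of what such a narrower fix typically looks like, assuming the default kubeadm/flannel pod CIDR of 10.244.0.0/16 (the CIDR is an assumption, not taken from the thread):

```shell
# Allow forwarding only for pod-network traffic instead of
# changing the FORWARD chain's default policy
sudo iptables -A FORWARD -s 10.244.0.0/16 -j ACCEPT
sudo iptables -A FORWARD -d 10.244.0.0/16 -j ACCEPT
```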
The default changed with Docker v1.13 - https://docs.docker.com/engine/userguide/networking/default_network/container-communication/#container-communication-between-hosts It's currently unclear to me how this issue should be fixed. Maybe flannel should automatically change the iptables rules, or just document the Docker change, or maybe the bridge CNI plugin should be doing something about it. Also @limited - for NAT you should just pass the --ip-masq flag.
Thanks, will give the ip-masq a shot.
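For reference, flanneld's --ip-masq flag sets up masquerading for traffic leaving the overlay network. A sketch of invoking it directly (the etcd endpoint is a placeholder for your own setup; most installs pass the flag through a systemd unit or DaemonSet args instead):

```shell
# --ip-masq enables outbound NAT for pod traffic leaving the overlay;
# the endpoint below is illustrative only
flanneld --ip-masq --etcd-endpoints=http://127.0.0.1:2379
```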
+ some work on flannel integration, not completed yet. See also flannel-io/flannel#799 for an issue explaining why iptables rules need to be changed.
This seems related: containernetworking/plugins#75
containernetworking/plugins#75 originates from kubernetes/kubernetes#40182, I believe.
I can confirm this issue with flannel 0.9.0 (both vxlan & host-gw), k8s 1.8.2, docker 17.05.
To work around the Docker change from v1.13 which changed the default FORWARD policy to DROP. The change has bitten many many users. The troubleshooting documentation is also updated to talk about the issue. Replaces PR flannel-io#862 Fixes flannel-io#834 Fixes flannel-io#823 Fixes flannel-io#609 Fixes flannel-io#799
@tomdee [bbalasubram@cirrus-vm1 Demo]$ docker version Server:
I see it with 0.10.0 too. And it doesn't work after I apply those iptables rules.
I think this issue needs to be re-opened. With [0], I still need to apply iptables -P FORWARD ACCEPT [0] quay.io/coreos/flannel:v0.10.0-amd64 cc @tomdee
I see it with 0.10.0 too.
I was also facing the same, until I allowed "All Traffic" in the AWS security group.
Flushed all my firewalls with
I fixed it permanently by doing this:
Modifying /etc/sysctl.conf did the trick, thanks.
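The exact sysctl edit isn't shown in the thread; enabling kernel IP forwarding is the usual change made in /etc/sysctl.conf for this kind of setup, so the following is an assumption about what was done:

```shell
# Enable kernel IP forwarding persistently (an assumption; the
# thread does not show the actual line that was added)
echo 'net.ipv4.ip_forward = 1' | sudo tee -a /etc/sysctl.conf
# Reload sysctl settings so the change takes effect immediately
sudo sysctl -p
```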
For me too; after that, the iptables FORWARD policy is set to ACCEPT. Before, it was DROP, and traffic worked only if I set the policy to ACCEPT manually. Is this really the correct solution? I would prefer the policy to stay at DROP, with appropriate rules allowing the traffic that is needed.
I'm seeing UFW block the traffic:
[UFW BLOCK] IN=flannel.1 OUT=cni0 MAC=72:b6:26:dd:65:45:8a:ad:1c:19:5b:d5:08:00 SRC=10.244.1.8 DST=10.244.0.14 LEN=93 TOS=0x00 PREC=0x00 TTL=62 ID=22646 DF PROTO=UDP SPT=52343 DPT=53 LEN=73
I tried: sudo ufw allow in on flannel.1 && sudo ufw allow out on flannel.1
Any suggestions?
The following IPTables rules are missing, which breaks routing between containers on different nodes. I can ping between hosts, but not between containers running on those hosts.
Expected Behavior
I expect, by default and without special modifications to IPTables, to be able to connect to containers running on other flannel nodes (i.e. kube master/api-server and kube-worker).
Current Behavior
IP connectivity between containers running on flannel nodes is broken.
Possible Solution
Add the iptables rules described above.
Steps to Reproduce (for bugs)
Install k8s cluster v1.6 using kubeadm with CNI and flannel plugin.
Context
Your Environment