NetworkPolicy broken when pods on different nodes #1830
I tried to apply a GlobalNetworkPolicy to get iptables logs, but as soon as it was applied, all traffic between pods was dropped. Policy:
Log:
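For context, a Calico GlobalNetworkPolicy used to log traffic in iptables generally looks something like the sketch below; the name and selector are illustrative, not the policy from this report. Note that once a policy selects an endpoint, Calico drops any traffic that no rule explicitly allows, so a Log rule is normally paired with an Allow (or Pass) rule.

```yaml
# Illustrative sketch only -- not the reporter's actual policy.
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: log-and-allow        # hypothetical name
spec:
  selector: all()            # applies to every endpoint
  types:
    - Ingress
    - Egress
  ingress:
    - action: Log            # log the packet in iptables...
    - action: Allow          # ...then let it through; without an Allow, traffic
                             # that matches no other policy is dropped
  egress:
    - action: Log
    - action: Allow
```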
I dug further into this issue; the problem only happens on an upgraded cluster. I'm able to reproduce it:
I chose this version because it matches our production clusters, but v1.16.3-rancher1-1 seems affected too.
The same issue after upgrading to 1.17.4 with Rancher 2.4.2. NetworkPolicy is broken: it works only between pods on the same node.
We see the same error when running RKE 1.1.0 with Canal as the network plugin. The nodes run Ubuntu 18.04. We get the same result no matter if we use the service IP or the pod IP.
> We get the same result no matter if we use the service IP or the pod IP.

I had this problem after upgrading the cluster to 1.17 with existing NetworkPolicies. Yes, I'm using Canal.
I found the bug in my setup: the "from" pod was using hostNetwork: true.
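That is consistent with how pod selectors behave: a pod running with hostNetwork: true shares the node's network namespace, so its traffic leaves from the node IP and is generally not matched by a podSelector in an ingress "from" clause. A minimal illustration, with hypothetical names:

```yaml
# Hypothetical client pod: because it uses the host network, NetworkPolicy
# ingress rules that select it by label will not match its traffic.
apiVersion: v1
kind: Pod
metadata:
  name: client
  labels:
    role: client
spec:
  hostNetwork: true
  containers:
    - name: curl
      image: curlimages/curl
      command: ["sleep", "infinity"]
```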
We're running RKE on Kubernetes v1.18.3 using Canal and we're seeing this behaviour out of the box, with no upgrade.
This issue/PR has been automatically marked as stale because it has not had activity (commit/comment/label) for 60 days. It will be closed in 14 days if no further activity occurs. Thank you for your contributions.
RKE version:
Docker version: (`docker version`, `docker info` preferred) Docker version 18.09.4, build d14af54
Operating system and kernel: (`cat /etc/os-release`, `uname -r` preferred) Ubuntu 16.04.5 LTS, kernel 4.4.0-141-generic
Type/provider of hosts: (VirtualBox/Bare-metal/AWS/GCE/DO) vSphere
cluster.yml file: cluster.yml (attached; the relevant network section is sketched below)
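The full cluster.yml is attached to the issue; the part relevant to this report is the network plugin selection, which for an RKE cluster using Canal looks roughly like the fragment below. The version shown is an example taken from the comments above, not necessarily that of this cluster.

```yaml
# Illustrative fragment of an RKE cluster.yml -- not the attached file.
kubernetes_version: "v1.16.3-rancher1-1"   # one of the versions reported as affected
network:
  plugin: canal                            # RKE's default CNI
```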
Steps to Reproduce:
Deploy a cluster on 15.5.6 with the default CNI.
Add a network policy (a sketch of the kind of policy involved follows the results below).
Results:
Only pods on the same node are allowed to reach the selected pod on port 8081.
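A minimal sketch of such a policy, with illustrative labels and names rather than the reporter's actual manifest: it should allow any pod labelled role: client to reach the selected pod on TCP 8081, but on the affected clusters this only works when client and target are scheduled on the same node.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-8081           # hypothetical name
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: backend           # hypothetical target pod label
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: client   # hypothetical client label
      ports:
        - protocol: TCP
          port: 8081
```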
We have been facing this issue with NetworkPolicy on our development and production clusters since the upgrade, and I can reproduce it on our test cluster.
I'm not sure whether it comes from Calico, flannel, kube-proxy...
calico-node shows:
IPv6 is disabled on the nodes, as suggested in #1606.