AWS_VPC_K8S_CNI_RANDOMIZESNAT prng not working in amazon-k8s-cni:v1.4.1 (and 1.5+) #662
Comments
Hi @jraby, yes, this is the expected behaviour, AFAICT from the code: amazon-vpc-cni-k8s/pkg/networkutils/network.go, lines 349 to 357 at 3aacadc.
You should see a warning logged about falling back to "random" instead of "random-fully". As for when the AL2 iptables package will be updated to a more modern version, I don't know the answer to that. Perhaps @stewartsmith would be able to answer that.
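The referenced lines are not reproduced above, so here is a rough, hypothetical sketch (not the actual network.go code; the function and variable names are invented for illustration) of the kind of fallback described: check the version of the bundled iptables, and if it predates 1.6.2 (the first release with --random-fully on SNAT/MASQUERADE targets), log a warning and use --random instead.

```go
package main

import (
	"fmt"
	"log"
	"os/exec"
	"regexp"
	"strconv"
)

// iptablesVersion extracts the "vX.Y.Z" version from `iptables --version`.
func iptablesVersion() (major, minor, patch int, err error) {
	out, err := exec.Command("iptables", "--version").CombinedOutput()
	if err != nil {
		return 0, 0, 0, err
	}
	m := regexp.MustCompile(`v(\d+)\.(\d+)\.(\d+)`).FindStringSubmatch(string(out))
	if m == nil {
		return 0, 0, 0, fmt.Errorf("could not parse iptables version from %q", out)
	}
	major, _ = strconv.Atoi(m[1])
	minor, _ = strconv.Atoi(m[2])
	patch, _ = strconv.Atoi(m[3])
	return major, minor, patch, nil
}

// hasRandomFully reports whether iptables is at least 1.6.2, the first
// release that understands --random-fully on SNAT/MASQUERADE rules.
func hasRandomFully() bool {
	maj, min, pat, err := iptablesVersion()
	if err != nil {
		return false
	}
	return maj*10000+min*100+pat >= 10602
}

// snatFlag mirrors the fallback described in the comment above: "prng" asks
// for --random-fully but degrades to --random with a warning when the
// bundled iptables is too old. "hashrandom" always maps to --random.
func snatFlag(randomizeMode string) string {
	if randomizeMode == "prng" && !hasRandomFully() {
		log.Println("prng requested but iptables lacks --random-fully; falling back to --random")
		return "--random"
	}
	if randomizeMode == "prng" {
		return "--random-fully"
	}
	return "--random"
}

func main() {
	fmt.Println("SNAT randomization flag:", snatFlag("prng"))
}
```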
In the end we rolled our own image with the following pseudo diff:
One needs to be aware that iptables >= 1.8.0 defaults to the nftables backend (so does the iptables shipped with centos:8), which will break if used on a node where the host still uses the legacy iptables backend. Fun times.
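As a side note on that caveat, here is a minimal sketch (not from the CNI codebase; the detection approach is an assumption) of spotting which backend an iptables binary reports, since iptables >= 1.8 prints either "(nf_tables)" or "(legacy)" in its --version output, and mixing the two on one node is what breaks.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// iptablesBackend guesses the backend from `iptables --version` output,
// e.g. "iptables v1.8.4 (nf_tables)" or "iptables v1.8.4 (legacy)".
func iptablesBackend() string {
	out, err := exec.Command("iptables", "--version").CombinedOutput()
	if err != nil {
		return "unknown"
	}
	s := string(out)
	switch {
	case strings.Contains(s, "nf_tables"):
		return "nft"
	case strings.Contains(s, "legacy"):
		return "legacy"
	default:
		return "legacy" // pre-1.8 iptables prints no backend marker
	}
}

func main() {
	fmt.Println("iptables backend:", iptablesBackend())
}
```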
We need awslabs/amazon-eks-ami#380 to be done before we can resolve this issue.
Fixed in v1.5.7.
I also got the same issue on the newest EKS 1.16 with CNI 1.6.1, and my kube-proxy 1.16.8 keeps logging:
Is there any way to fix it?
@mogren the same issue shows up in EKS 1.16 with CNI 1.6.2 and kube-proxy v1.16.8.
The iptables version shipped in the amazon-k8s-cni image does not support the --random-fully SNAT option. This option was introduced in iptables 1.6.2, whereas the amazon-k8s-cni image ships an older v1.4.x (this seems to be the version from the Amazon Linux yum repos). When setting AWS_VPC_K8S_CNI_RANDOMIZESNAT to prng with this image, it falls back to the hash random method.

Is it expected that fully-random does not work out of the box on the official image? And is there a way to get the --random-fully behavior using the official images?

Thanks.

(edit: clarified question)