IPV6 check doesn't work as expected on AWS EKS #2787
Comments
That sounds really annoying! We could add a --ipv4 flag to force v4, but I'm thinking: wouldn't the best option be to open two ports? At least on the server side it should be easy (accept messages on both, respond on the same one). The client side might be slightly messier because we'd have to decide which one to send on. Anyway, a PR for the above would be amazing, because I might not get around to it myself right now :)
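The "accept messages on both" idea could be sketched with plain stdlib sockets (a hypothetical illustration only; Locust's actual rpc layer is zmq-based and `open_listeners` is an invented helper, not Locust code):

```python
import socket

def open_listeners(port: int = 0):
    """Open an IPv4 listening socket and, if the stack supports it,
    a separate IPv6 one on the same port.

    Hypothetical sketch of serving both protocols at once; not
    Locust's actual implementation.
    """
    s4 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s4.bind(("127.0.0.1", port))
    s4.listen()
    port = s4.getsockname()[1]  # kernel-chosen port if 0 was passed

    s6 = None
    if socket.has_ipv6:
        try:
            s6 = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
            # Keep the two sockets independent: no v4-mapped addresses.
            s6.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 1)
            s6.bind(("::1", port))
            s6.listen()
        except OSError:
            s6 = None  # IPv6 advertised but not actually usable
    return s4, s6, port
```

The server then answers on whichever socket a message arrived on; the client-side ambiguity (which family to send on first) is exactly the messy part noted above.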
Thanks for the quick response. I was talking to a colleague of mine today about the situation with other hyperscalers, in particular GKE, which supports dual stack for the service resource just as it should. So my feeling is that this is a bit of an edge case, where something like your first suggestion, just specifying an argument to force only ipv4, would be a reasonable solution. I mean, I labeled this as a bug, but it makes perfect sense to assume that if an OS supports dual stack then it should be able to communicate with the master over either of the two protocols, and to default to ipv6. Furthermore, I don't see an elegant way to handle the client, as you pointed out, because it simply can't know until it tries and fails with ipv6. So, if you are OK with this, I can create a small PR for just the simpler solution, which would probably be appreciated only by those who need to run locust on EKS ;-).
This issue is stale because it has been open 60 days with no activity. Remove stale label or comment or this will be closed in 10 days.
I had a vacation in the middle and have been rather busy with work. I hope to get a PR done within a week.
I have a small update on this. After attempting to adjust the code by adding an "ipv4-only" option, I didn't particularly like what I was seeing: I needed to add an extra parameter to both the Server and Client class init methods, with all the consequences that entails (the options are not available inside the rpc subdirectory). This can be done, but I had another idea and I wouldn't mind your feedback before implementing it. What if we made it so that if one sets "master-bind-host" to an IPv4 address, we do not enable IPv6? I tried setting "0.0.0.0" for this variable, but currently that doesn't do it. However, I can add a check to verify whether it is an IPv4 address and, if so, not enable IPv6 in zmqrpc.py (the default for this variable at the moment is "*"). Thoughts?
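The proposed check, deciding from the bind host whether to enable IPv6, could look something like this with the stdlib `ipaddress` module (a hypothetical sketch; `is_ipv4_bind_host` is an invented helper name, and the actual change landed in zmqrpc.py):

```python
import ipaddress

def is_ipv4_bind_host(host: str) -> bool:
    """Return True if ``host`` is a literal IPv4 address.

    Hypothetical sketch of the proposed master-bind-host check:
    "0.0.0.0" would disable IPv6, while "*" (Locust's default
    for this variable) or a hostname would leave it enabled.
    """
    try:
        return isinstance(ipaddress.ip_address(host), ipaddress.IPv4Address)
    except ValueError:
        # "*" or a hostname: not a literal IPv4 address
        return False
```

With such a check, setting master-bind-host to "0.0.0.0" becomes an explicit opt-out of IPv6 without adding any new parameters to the Server and Client classes.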
That sounds like a reasonable way to do it, I'm completely fine with it!
Fixed in #2923
Description
In the current implementation, HAS_IPV6 is used to determine whether the host has an IPv6 stack and, if it does, to use it. Unfortunately this prevents locust.io from working in a standard EKS cluster in AWS: although the container supports IPv6, the service resource in EKS does not support dual stack and only supports IPv4 (or only IPv6 if you disable IPv4, which is of course not the default; see the AWS documentation: https://docs.aws.amazon.com/eks/latest/userguide/cni-ipv6.html). Thus rpc between master and workers does not work. I removed the lines below from the code, rebuilt the container, and now it works. Instead of the code trying to guess which stack to use, I would suggest making it configurable with an environment variable.
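The suggested environment-variable override could be sketched like this (hypothetical: `LOCUST_FORCE_IPV4` and `ipv6_enabled` are invented names for illustration, not actual Locust settings):

```python
import os
import socket

def ipv6_enabled() -> bool:
    """Decide whether to enable IPv6 for master<->worker rpc.

    Hypothetical sketch of an explicit override on top of the
    capability check; LOCUST_FORCE_IPV4 is an invented variable
    name, not an actual Locust setting.
    """
    if os.environ.get("LOCUST_FORCE_IPV4") == "1":
        return False  # operator explicitly forced IPv4-only
    return socket.has_ipv6  # otherwise fall back to capability detection
```

This keeps the current auto-detection as the default while letting environments like EKS, where the OS stack and the service resource disagree, opt out.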
Either way, even if this isn't deemed important enough to address, I'll leave this here so others can hopefully find it useful.
Command line
locust -f mylocustfile.py -
Locustfile contents
Python version
3.11.9
Locust version
2.29.1
Operating system
Linux 5.15.0-1056-aws