Port conflict on client with high network utilization #1728
Comments
Hey @dadgar, I was wondering how this one was going; let me know if you need anything more from me.
Sorry, I have not yet had a chance to dig into it; hopefully soon.
Hey @jshaw86, just an update: I tried to reproduce this on both 0.4.1 and master with what you gave me. I went from count = 1 to count = 100, back and forth about 8 times, and never hit this.
@dadgar hmmm, and you used the example I gave? Because you have to be doing some outbound tcp/http calls to get it. I will try to spin up the nomad vagrant and reproduce, since it's been a week. I'm not sure what to tell you other than that we hit it a lot and it's very reproducible.
@jshaw86 Yeah, the steps I took were:
Then I tried with exec as you had it, but it was a bit slow to bring them up and down, so I also tried with raw_exec.
@dadgar another engineer just hit it in our staging:
@dadgar did you get up to instance count = 60 with exec before switching to raw_exec?
@jshaw86 Yeah, I was up to 100. If that has just happened, can you use lsof? If not, maybe a dump of all the allocations. I want to know if Nomad double-assigned that address.
@dadgar nothing listed on lsof for that port, but we were about 5 minutes late, so whatever had it went away.
@jshaw86 Can you send me this?
If it is sensitive, you have my email.
@dadgar yeah, I'm trying to repro on the nomad vagrant for you; I will try to get an lsof and that for you.
Awesome! Thanks! Want to resolve this for you guys so you can just cherry-pick.
@dadgar Hit it pretty easily with Count = 60; lsof was empty for port 43409. As an aside, some of the containers were having issues resolving httpbin and failed with:
I've attached the curl to allocations here:
Here's my alloc status and fs:
@dadgar make sure you up your memory to 4GB so you can run 60 jobs with the cpu/mem requirements of the job.
@dadgar did this do it for you?
@jshaw86 I'm attempting to reproduce this myself, but I noticed all of your port conflicts have been in the default ephemeral port range. Do you mind adding those as reserved ports on your client and seeing if that prevents collisions? If it does, the fix might be for Nomad to automatically add that range to reserved ports.
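For concreteness, a minimal sketch of the kind of client stanza being suggested here, assuming the common Linux default ephemeral range of 32768-61000 (check /proc/sys/net/ipv4/ip_local_port_range on your own hosts):

```hcl
# Nomad client agent config (sketch): reserve the kernel's ephemeral range
# so Nomad never assigns those ports as dynamic ports.
client {
  enabled = true

  reserved {
    # Assumed default Linux ephemeral range; verify on your own hosts.
    reserved_ports = "32768-61000"
  }
}
```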
You may need to try on 0.4.0, since there is that other reserved port bug introduced in 0.4.1.
@schmichael @dadgar I'm on 14.04 rather than 16.04, if it makes a difference. One of our SREs, Mckenzie, is going to reply to your suggestion about the ephemeral port range, as he did try that a while back and ran into some issues. I think, though, there is some difference in ideology on how this should work. Currently the Nomad implementation blacklists ports rather than whitelisting them, which is what is causing the issue. I think the way to avoid this is to give Nomad a whitelist of ports and then change the kernel's ephemeral port range to sit outside of what we give Nomad. Mckenzie can provide more details, as he's more experienced in this area. I think Mckenzie and @diptanu had a chat along these lines at HashiConf as well.
@jshaw86 They both should accomplish the same thing.
@schmichael The problem we are faced with is that with a Nomad agent port blacklist (reserved_ports), you can only tell Nomad what ports not to use. This means that the Nomad agent still uses the ephemeral port range to dynamically provision ports. This needs to be a whitelist. Any application deployed onto the Nomad agent that uses the ephemeral port range has the potential to consume a port that was meant to be provisioned by the Nomad agent. This is where we get the conflict.

We have two buckets of ports: ip_local_reserved_ports (we'll call this local) and ip_local_port_range (we'll call this ephemeral). The local bucket holds a list of ports that we have determined should not be dynamically assigned. These ports are used in configuration files to start applications outside of any Nomad process; this is how we prevent any containers from getting these ports. All of the containers deployed by Nomad use the ephemeral bucket. Any application started in a container also uses the ephemeral port range to make outbound connections. Again, our conflict; this is where we are seeing port collisions.

The Nomad agent needs to be able to use a range of ports, reserved from the local bucket, for its dynamic port allocation. Then any process in a container that needs to get a port uses the ephemeral port range.
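To make the two buckets concrete, a sketch of the sysctl settings being described; the specific port values here are illustrative placeholders, not ones taken from this environment:

```
# "local" bucket: ports the kernel must never hand out dynamically; used by
# statically configured services started outside of any Nomad process.
net.ipv4.ip_local_reserved_ports = 8080,9000-9010
# "ephemeral" bucket: outbound connections and other dynamic binds draw from here.
net.ipv4.ip_local_port_range = 32768 61000
```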
@schmichael and @dadgar I think we have some more issues, or at the least some confusion. From the docs (https://www.nomadproject.io/docs/jobspec/networking.html), the dynamic port range is 20000-60000. In testing, after setting net.ipv4.ip_local_port_range = 32768 61000, Nomad does not respect the ephemeral port range and schedules at random from the hard-coded range. Here is a list of ports from our test:
If you can confirm that Nomad does not use the ephemeral range and in fact uses a hard-coded (whitelist) range of 20000-60000 (from the kernel docs: "If possible, it is better these numbers have different parity."), then all we have to do is set net.ipv4.ip_local_reserved_ports = 20000-60000 to solve this problem. At this time, this would leave us about 10k ports for outbound connections. I think we can get by with this, but we might want to consider making the hard-coded range a bit smaller. Source: https://github.com/hashicorp/nomad/blob/master/nomad/structs/network.go
@softslug What you are saying is correct. Additionally, if you need more ports, you can add the port range 50000-60000 to the port blacklist, and now you have 20,000 ports for ephemeral usage (to whatever extreme you want). Nomad can't control what ephemeral port applications launched under it get assigned; that is done by the kernel. So you need to set net.ipv4.ip_local_reserved_ports.
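Putting that together, the kernel side of the suggestion might look like the line below, assuming 50000-60000 has been added to the Nomad client's reserved_ports blacklist so only the rest of Nomad's hard-coded 20000-60000 range needs reserving:

```
# Keep the kernel's ephemeral allocator away from the ports Nomad can still
# assign dynamically (20000-49999 once 50000-60000 is blacklisted in Nomad).
net.ipv4.ip_local_reserved_ports = 20000-49999
```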
@jshaw86 @mac-abdon How did it go?
@dadgar We're building Nomad off this: https://github.com/pubnub/nomad/commits/v0.4.2. It includes @diptanu's chroot changes, and we changed the port range that Nomad can assign so we can better tune the kernel and get around the existing 0.4.1 reserved port bug on AWS. We will hopefully have this deployed with the kernel changes by EOB today. Will close if everything checks out. Thx!
Any update?
Yep, we deployed to both our staging servers and it looks like it's resolved; going to wait to close until we're in production, which will be later this week. We didn't realize the Nomad servers did the port assignments, so it took us a bit longer to deploy in staging than we wanted; we practiced the roll before doing it in production to make sure we didn't break anything, which is why it's taken longer. Thx!
We've successfully rolled this into most of our production infrastructure, so I'm going to close this. If anyone comes across this in the future: all we did was change the Nomad port range to be the top end of the ephemeral port range (pubnub@7db9c44; note that 65536 is non-inclusive), then tuned our sysctl ephemeral port range to be outside of that.
Once #1617 is resolved, you should be able to just specify reserved ports and change the sysctl ephemeral port range, rather than having to make the code change.
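For reference, the kernel half of that arrangement could look like the line below; the thread doesn't spell out the exact values, so treat these as an assumption about the shape of the fix rather than the actual settings used:

```
# With the patched Nomad allocating dynamic ports near the top of the port
# space (below 65536), keep the kernel's ephemeral range entirely below
# Nomad's dynamic range so the two pools can never collide.
net.ipv4.ip_local_port_range = 32768 60999
```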
So I think the real outcome of this is that we should make that range configurable and add some docs on how to avoid this issue.
@dadgar do you want another ticket for those, or are you guys tracking internally?
Can you open another ticket!
@dadgar was this change, making the min/max port configurable, ever made? :)
@jippi Not yet. Working with a user to test a hypothesis, and then we will make the appropriate changes in Nomad to avoid this problem. It more or less breaks down to determining the kernel's ephemeral port range and trying to avoid an overlap, and also shrinking Nomad's own ephemeral port range.
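As a rough illustration of that idea (not actual Nomad code), discovering the kernel's ephemeral range is a single proc read, which a scheduler could then use to pick a non-overlapping dynamic port range:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// ephemeralPortRange reads the kernel's ephemeral (local) port range from
// /proc. A scheduler could use this to choose a dynamic port range that does
// not overlap with the ports the kernel hands out for outbound connections.
func ephemeralPortRange() (lo, hi int, err error) {
	data, err := os.ReadFile("/proc/sys/net/ipv4/ip_local_port_range")
	if err != nil {
		return 0, 0, err
	}
	if _, err := fmt.Sscanf(strings.TrimSpace(string(data)), "%d %d", &lo, &hi); err != nil {
		return 0, 0, err
	}
	return lo, hi, nil
}

func main() {
	lo, hi, err := ephemeralPortRange()
	if err != nil {
		fmt.Fprintln(os.Stderr, "reading port range:", err)
		return
	}
	fmt.Printf("kernel ephemeral range: %d-%d\n", lo, hi)
}
```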
@dadgar I've hit the same problem here. Any update on the change?
I'm going to lock this issue because it has been closed for 120 days ⏳. This helps our maintainers find and focus on the active issues.
Nomad version
Nomad 0.4.1
Operating system and Environment details
Ubuntu 14.04
Issue
At a certain allocation/network request load we get a failure to bind on the specified NOMAD_ADDR port.
Reproduction steps
reserved_ports_issue.zip
@dadgar I've attached a minimal example. When I hit 60 allocations on my vagrant I hit the issue. I've attached the .nomad file and http_server.js. NOTE: you may need to tweak the region and DCs to match your vagrant.
As a side note, I did 3 separate runs, increasing the Count each time without tearing down the old allocations.
First run: Count = 25
Second run: Count = 40
Third Run: Count = 60
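The attached .nomad file isn't reproduced inline in this thread. Purely as a hypothetical illustration of the kind of job involved (a service on a Nomad-assigned dynamic port whose task also makes outbound HTTP calls), it might look roughly like this; the driver, resources, and counts are made up:

```hcl
job "reservedports" {
  datacenters = ["dc1"]

  group "web" {
    # Bumping this count (25 -> 40 -> 60) is what surfaced the collisions.
    count = 60

    task "http" {
      driver = "raw_exec"

      config {
        command = "node"
        args    = ["http_server.js"]
      }

      resources {
        cpu    = 100
        memory = 64

        network {
          mbits = 1
          # Nomad picks the port and exports it to the task as NOMAD_ADDR_http.
          port "http" {}
        }
      }
    }
  }
}
```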
Nomad Client logs (if appropriate)
Below are the results from nomad status reservedports, nomad fs, and nomad alloc-status:
Job file (if appropriate)
see attached