Problems with port mapping (docker driver) #507

Closed
adrianlop opened this issue Nov 26, 2015 · 4 comments

@adrianlop
Contributor

Hi there guys,

First of all, thanks for this awesome project and the recent 0.2 release with Consul integration.

I'm having problems with port mapping using the Docker driver.
I have a test environment with 1 Nomad server and 2 Nomad clients.
I'm just trying the example provided by nomad init: I run nomad run example.nomad, the job is allocated on one of the clients (the one with the most free resources), and apparently the port mapping is done, according to the client's DEBUG messages:

2015/11/26 16:51:03 [DEBUG] driver.docker: networking mode not specified; defaulting to bridge
2015/11/26 16:51:03 [DEBUG] driver.docker: allocated port 10.120.0.59:46905 -> 6379 (mapped)
2015/11/26 16:51:03 [DEBUG] driver.docker: exposed port 46905

This is the docker ps:

root@host2:~# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                                                                  NAMES
949e24f502b0        redis:latest        "/entrypoint.sh redis"   44 seconds ago      Up 44 seconds       10.120.0.59:46905->46905/udp, 6379/tcp, 10.120.0.59:46905->46905/tcp   redis-ae8d25bd-1005-ba9f-6a56-bef0535cbc9d
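
Note the mapping in that PORTS column: the dynamic host port 46905 is published to container port 46905 (10.120.0.59:46905->46905/tcp) rather than to 6379, where Redis actually listens, which is presumably why the connection attempt below is refused. A quick way to double-check the binding (just a sketch; the container ID is the one from the docker ps output above, and the output is simply what that PORTS column implies):

root@host2:~# docker port 949e24f502b0
46905/tcp -> 10.120.0.59:46905
46905/udp -> 10.120.0.59:46905
# a correct mapping for this job would instead look like:
#   6379/tcp -> 10.120.0.59:46905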

And when I try to reach the Redis instance inside Docker via the mapped port 46905:

root@host2:~# netstat -an | grep 46905
tcp        0      0 10.120.0.59:46905       0.0.0.0:*               LISTEN     
udp        0      0 10.120.0.59:46905       0.0.0.0:*                          
root@host2:~# telnet 10.120.0.59 46905
Trying 10.120.0.59...
telnet: Unable to connect to remote host: Connection refused

The Nomad server and clients are running as root, and I'm using the config provided in the getting started guide: https://www.nomadproject.io/intro/getting-started/cluster.html (server1.hcl, client1.hcl and client2.hcl -- S1 and C1 on the same host, C2 on another host).

I also tried a simplified scenario (the Vagrant machine provided in the root of the Nomad project) using sudo nomad agent -dev -- that is, following the Getting Started guide step by step, then nomad init and nomad run example.nomad:

2015/11/26 17:04:12 [DEBUG] driver.docker: networking mode not specified; defaulting to bridge
2015/11/26 17:04:12 [DEBUG] driver.docker: allocated port 127.0.0.1:52465 -> 6379 (mapped)
2015/11/26 17:04:12 [DEBUG] driver.docker: exposed port 52465

vagrant@nomad:~$ sudo docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                                                              NAMES
f4b7690f721a        redis:latest        "/entrypoint.sh redis"   4 minutes ago       Up 4 minutes        6379/tcp, 127.0.0.1:52465->52465/tcp, 127.0.0.1:52465->52465/udp   redis-2f494bef-795d-5f6a-021b-c3dc1ded852d

vagrant@nomad:~$ redis-cli -p 52465
127.0.0.1:52465> CLIENT LIST
Error: Server closed the connection

So I thought maybe the Redis instance wasn't working properly, but then I logged into the container:

vagrant@nomad:~$ docker exec -it f4b7690f721a /bin/bash
root@f4b7690f721a:/data# redis-cli
127.0.0.1:6379> CLIENT LIST
id=2 addr=127.0.0.1:37807 fd=6 name= age=7 idle=0 flags=N db=0 sub=0 psub=0 multi=-1 qbuf=0 qbuf-free=32768 obl=0 oll=0 omem=0 events=r cmd=client

And it worked like a charm.
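
For what it's worth, the same mismatch can be confirmed with docker exec by pointing redis-cli at the two container ports (a sketch; the expected output assumes the PORTS column shown above):

vagrant@nomad:~$ docker exec f4b7690f721a redis-cli -p 6379 ping
PONG
vagrant@nomad:~$ docker exec f4b7690f721a redis-cli -p 52465 ping
Could not connect to Redis at 127.0.0.1:52465: Connection refused

So Redis itself is fine on 6379; it's the published target port (52465) that nothing listens on.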

I'm a bit confused, since I don't know if something changed with port mapping in the 0.2 release or if I'm missing something from the docs 😢

The only workaround I found is using network_mode = "host" with no dynamic port, so port 6379 gets allocated both on the host and in the container because they share the network stack, but that's not what I wanted heheh.
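
For reference, this is roughly what that workaround looks like in the job file (a sketch based on the example.nomad redis task; the values and exact stanza syntax are from memory of the example job and may differ between Nomad versions):

# task stanza of example.nomad, with host networking instead of a dynamic port
task "redis" {
  driver = "docker"

  config {
    image        = "redis:latest"
    network_mode = "host"   # container shares the host's network stack
  }

  resources {
    cpu    = 500
    memory = 256
    network {
      mbits = 10
      # no dynamic port requested; Redis is reached directly on host port 6379
    }
  }
}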

Can anyone help please?
Thank you in advance.

@diptanu
Contributor

diptanu commented Nov 26, 2015

Hi, this was a bug in the 0.2.0 release; we are doing a 0.2.1 release today which fixes it.

@adrianlop
Contributor Author

That's great, Diptanu. Thank you!!!

keep up the good work :)

@adrianlop
Contributor Author

Great, now it works with the 0.2.1 release.
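
In case it helps anyone else: after upgrading, docker ps should show the dynamic host port published to the Redis port (something like 127.0.0.1:52465->6379/tcp instead of ->52465/tcp), and redis-cli against the dynamic port answers normally (a sketch; the host port differs per allocation):

vagrant@nomad:~$ redis-cli -p 52465 ping
PONG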
