So, locally, we have a redis container, and within our docker-compose file for redis-commander we have

```yaml
environment:
  - REDIS_HOSTS=local:redis:6379
```

and all is well.
Also, locally, we can make an SSH tunnel to any of our production/staging/testing remote Redis instances, reachable at 127.0.0.1:49004 for example. As a Mac user, I use SSH Tunnel Manager, so I have a button for each tunnel I want to open; others use parameterised shell scripts, and some just run the ssh command manually. The end result is always the same: an SSH tunnel on the host, with a port mapping in the 49000-49999 range, to one of the remote Redis servers.
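For reference, the plain `ssh` equivalent of one of those tunnels (user and bastion host names are placeholders for our actual servers) looks something like:

```shell
# Forward local port 49004 to the remote Redis (6379).
# -N: no remote command, tunnel only; binds to 127.0.0.1 by default.
ssh -N -L 49004:localhost:6379 user@staging-bastion
```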
I think (in an abstract sense) there are two ways to solve this.
Get the container running redis-commander to create the SSH tunnel rather than the host. This would involve injecting the relevant private keys into a third-party container, so probably not what most people would consider a safe approach.
Have the redis-commander container access ports on the host. I think this is more doable, but I don't know exactly how.
Is this possible?
Hi,
I do not know about Mac and Windows, but on Linux you can create firewall rules to redirect traffic on your host however you like.
With iptables you can create NAT rules in the PREROUTING chain to forward all traffic arriving on a port of your choice, on one of your local IPs that the docker container can reach, to the localhost endpoint of the SSH tunnel, as described here: https://superuser.com/questions/661772/iptables-redirect-to-localhost
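As a sketch of that approach (the docker bridge address 172.17.0.1 and port 49004 are assumptions, adjust to your setup), the rules would look roughly like:

```shell
# DNAT to 127.0.0.1 only works if the kernel is allowed to route
# localhost-destined packets arriving on the bridge (off by default):
sysctl -w net.ipv4.conf.docker0.route_localnet=1

# Redirect traffic from containers hitting the bridge IP on port 49004
# to the SSH tunnel endpoint listening on 127.0.0.1:49004 on the host:
iptables -t nat -A PREROUTING -p tcp -d 172.17.0.1 --dport 49004 \
  -j DNAT --to-destination 127.0.0.1:49004
```

A container could then use `REDIS_HOSTS=staging:172.17.0.1:49004` to reach the tunnel.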
Another possibility: do not create the SSH tunnels on your host with a localhost endpoint, but on another interface.
Either use the interface created by Docker for that (note that all containers may then access it; on Linux it is "docker0" or similar), or create your local tunnel endpoint on your public interface (but do not forget to add firewall rules to stop your co-workers from using it, allowing the running docker image only :-) ). Now you have a valid IP address you can connect to with Redis Commander.
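For the docker0 variant, binding the tunnel endpoint to the bridge address (commonly 172.17.0.1; user and host are placeholders) might look like:

```shell
# Bind the forwarded port to the docker0 bridge address so containers
# can reach it at 172.17.0.1:49004, while it stays unreachable from
# outside the host (the bridge is not exposed externally).
ssh -N -L 172.17.0.1:49004:localhost:6379 user@staging-bastion
```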
As I said, for Windows and Mac this has to be adapted, but it should work there too...
Ah. So if I create an SSH tunnel within a container in Docker, other containers can use that tunnel... which is sort of obvious really, as a container with a service (php-fpm for example) exposes a port for nginx to interact with. It's only if that port is also published in docker-compose (or equivalent) that the host can see it!
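A sketch of that idea as a compose file (the image, the key mount, and the host name are all illustrative placeholders, and this still means putting a private key inside a container):

```yaml
# Hypothetical sidecar holding the SSH tunnel; redis-commander reaches
# it by service name, and nothing is published to the host.
services:
  tunnel:
    image: alpine:3  # illustrative; needs openssh-client installed
    command: >
      sh -c "apk add --no-cache openssh-client &&
             ssh -N -o StrictHostKeyChecking=accept-new
                 -L 0.0.0.0:6379:localhost:6379 user@staging-bastion"
    volumes:
      - ./id_ed25519:/root/.ssh/id_ed25519:ro
  redis-commander:
    image: rediscommander/redis-commander
    environment:
      - REDIS_HOSTS=staging:tunnel:6379
```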
The additional IP in front of the local port must be given explicitly. Without this fourth parameter the port 49004 is bound to localhost only... (see man ssh). You can either use an explicit IP here or the generic 0.0.0.0 to bind to all IPs of the container.
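Concretely, run inside the container (host name is a placeholder):

```shell
# The bind address 0.0.0.0 before the local port makes sshd listen on
# all of the container's interfaces, so other containers can connect;
# omitting it would bind 49004 to the container's localhost only.
ssh -N -L 0.0.0.0:49004:localhost:6379 user@staging-bastion
```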