This would be useful for connecting to multiple clusters; currently the tool bails out with:

ERRO[0000] failed to run: localizer instance already running

Would it just be a matter of a custom socket path parameter, or would the hosts-file cleanup conflict too? In the latter case (or actually in general) it might make sense to separate the localizer hosts with a comment section, like other tools do, e.g.:
# Added by Docker Desktop
# To allow the same kube context to work on the host and the container:
127.0.0.1 kubernetes.docker.internal
# End of section
For multiple instances, e.g. the PID of the localizer could be included in the comment section and used as an identifier.
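A minimal sketch of that idea, assuming each instance wraps its entries in PID-tagged begin/end markers and removes only its own block on shutdown (the marker format and entries are illustrative, not localizer's actual behavior):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// markers returns hypothetical begin/end comment lines keyed by PID.
func markers(pid int) (string, string) {
	return fmt.Sprintf("# Added by localizer (pid %d)", pid),
		fmt.Sprintf("# End of localizer section (pid %d)", pid)
}

// addSection appends this instance's entries inside its own marker pair.
func addSection(hosts string, pid int, entries []string) string {
	begin, end := markers(pid)
	return hosts + "\n" + begin + "\n" + strings.Join(entries, "\n") + "\n" + end + "\n"
}

// removeSection drops only the block belonging to the given PID,
// leaving sections written by other instances untouched.
func removeSection(hosts string, pid int) string {
	begin, end := markers(pid)
	var out []string
	skipping := false
	for _, l := range strings.Split(hosts, "\n") {
		switch {
		case l == begin:
			skipping = true
		case l == end:
			skipping = false
		case !skipping:
			out = append(out, l)
		}
	}
	return strings.Join(out, "\n")
}

func main() {
	hosts := "127.0.0.1 localhost\n"
	hosts = addSection(hosts, os.Getpid(),
		[]string{"10.0.0.10 my-service.default.svc.cluster.local"})
	fmt.Print(removeSection(hosts, os.Getpid()))
}
```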
Motivation: it happens frequently that I connect to a test cluster to test something, but this involves verifying something in the prod database. Currently I have to disconnect/reconnect both the cluster and the services to achieve that.
Ahhh, right. Multiple sockets will need to be figured out for multiple instances. The code is written so that multiple instances should be able to run (with different CIDRs and cluster domains), but the socket path is an issue. I think we can get around this with yet another CLI arg 😄
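For illustration, a per-instance socket path flag could look roughly like the sketch below. The flag name, default path, and PID-based fallback are assumptions for this example, not localizer's actual CLI surface:

```go
package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	// Hypothetical flag: defaults to a PID-suffixed path so two instances
	// started without the flag still don't collide on the same socket.
	socketPath := flag.String("socket-path",
		fmt.Sprintf("/var/run/localizer-%d.sock", os.Getpid()),
		"unix socket this localizer instance listens on")
	flag.Parse()

	// Each instance would then listen on its own socket instead of a
	// single hard-coded path.
	fmt.Printf("would listen on %s\n", *socketPath)
}
```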