
unix-socket and proxy_protocol #482

Open

monoflash opened this issue Dec 19, 2024 · 21 comments
@monoflash

It is currently not possible to create a configuration that is convenient to use.
If you place sslh behind nginx, the real IP addresses are not visible in the logs, since sslh does not understand proxy_protocol.
If you place sslh in front of nginx, the server is put at serious risk: sslh's performance is well below nginx's, and the IP address problem remains.
Connecting the services over TCP/IP is also wasteful, since each connection costs resources (ports, open file descriptors) and time (~1 ms per connection), and sslh cannot listen or transmit over a unix socket.

If you add proxy_protocol support compatible with nginx, plus unix-socket support, to your future plans, it would solve many problems.
proxy_protocol and unix sockets would naturally be useful in both directions: for listening/receiving and for transmitting.
Is this possible in the future?

@yrutschle
Owner

I am not sure that sslh after nginx makes sense, but then I don't know what proxy_protocol does.

Unix sockets could definitely be added, both on receive and transmit. Do you think it would really improve performance? I somehow assumed the kernel would be smart enough not to do actual TCP when connecting to localhost.

@monoflash
Author

monoflash commented Dec 19, 2024

I am not sure that sslh after nginx makes sense, but then I don't know what proxy_protocol does.

nginx ships with a fairly powerful toolkit for repelling attacks and protecting ordinary streaming TCP/IP connections, and even UDP.
In addition, nginx has very efficient internals that can handle a huge number of connections without loading the CPU.

I installed sslh in front of nginx and it was sad: a huge number of processes in memory, dropped connections, significantly higher CPU usage...
Then I swapped them: nginx now faces the world, and sslh acts as a multiplexer, protected by nginx.

But even if you don't use nginx, there are load balancers with richer functionality, for example HAProxy.

What unites them is that they both know how to work with the proxy protocol.

Yes, nginx can handle stream connections and distribute them based on certificate and header data, but of course nginx cannot do everything that sslh can, which is why I have to use sslh.

But I can't control access by IP address, because only 127.0.0.1 is visible in the sslh logs.
The proxy protocol carries the client's real IP and port across plain TCP/IP connections; it is used by HAProxy and nginx.

On the other hand:

If sslh could not only receive data via the proxy protocol but also pass it on to downstream services, those services would also see the client's real IP address. This is exactly what the proxy protocol was made for.
The proxy protocol transfers the client's real IP address and port, plus the destination IP address and port, to the next service in the chain.

Description of the proxy protocol:
https://www.haproxy.org/download/2.3/doc/proxy-protocol.txt

An example implementation, an Apache module for the proxy protocol:
https://github.com/roadrunner2/mod-proxy-protocol
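
For illustration, here is roughly what a v1 header looks like on the wire, as a minimal Go sketch of a proxy prepending it before the payload (the target address, client address and ports are made up for the example; the header format itself is from the spec above):

package main

import (
	"fmt"
	"net"
)

func main() {
	// Connect to the downstream service as usual.
	conn, err := net.Dial("tcp", "127.0.0.1:122")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// PROXY protocol v1 is one human-readable line sent before any payload:
	// PROXY <TCP4|TCP6> <client ip> <dest ip> <client port> <dest port>\r\n
	header := fmt.Sprintf("PROXY TCP4 %s %s %d %d\r\n",
		"203.0.113.7", "203.0.113.1", 56324, 80)
	if _, err := conn.Write([]byte(header)); err != nil {
		panic(err)
	}

	// The original client bytes then follow unchanged.
	conn.Write([]byte("GET / HTTP/1.0\r\n\r\n"))
}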

@monoflash
Author

Unix sockets could definitely be added, both on receive and transmit. Do you think it would really improve performance? I somehow assumed the kernel would be smart enough not to do actual TCP when connecting to localhost.

A TCP/IP connection within the same server takes ~1 millisecond, and TCP/IP also consumes other resources, including ports. A unix-socket connection takes orders of magnitude less time and none of those other resources.
Sometimes, under heavy load, saving 1 millisecond per connection, plus saving open ports and file descriptors, becomes significant.
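
To put rough numbers on this claim for a particular machine, here is a minimal Go sketch (my own illustration, not a rigorous benchmark) timing connection setup over loopback TCP versus a unix socket; the port and socket path are arbitrary:

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

// timeDial measures average connection-setup time against an in-process listener.
func timeDial(network, addr string, n int) time.Duration {
	ln, err := net.Listen(network, addr)
	if err != nil {
		panic(err)
	}
	defer ln.Close()
	go func() {
		for {
			c, err := ln.Accept()
			if err != nil {
				return // listener closed
			}
			c.Close()
		}
	}()

	start := time.Now()
	for i := 0; i < n; i++ {
		c, err := net.Dial(network, addr)
		if err != nil {
			panic(err)
		}
		c.Close()
	}
	return time.Since(start) / time.Duration(n)
}

func main() {
	const n = 1000
	sock := "/tmp/bench.sock"
	os.Remove(sock) // clear a stale socket file from a previous run
	fmt.Println("tcp loopback:", timeDial("tcp", "127.0.0.1:18080", n))
	fmt.Println("unix socket: ", timeDial("unix", sock, n))
}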

But even if you don't need the savings, wiring all the services together without TCP/IP often looks cleaner, both visually and operationally.

I really hope that you will add unix-socket support.

@yrutschle
Owner

a huge number of processes in memory

You definitely want to try sslh-ev (and probably read some of the docs :-) ): it sounds like you used sslh-fork, which, as its name implies, forks a new process for each connection. This is fairly ungood for busy sites. sslh-ev is based on libev and should perform much better.

I can't control access by IP address, because only 127.0.0.1 is visible in the sslh logs.

sslh supports transparent proxying, but I guess that doesn't help here.

I think unix sockets aren't much work at all. I'll need to look into proxy-protocol before having an opinion, but at first sight it might be doable.

@monoflash
Author

sslh supports transparent proxying, but I guess that doesn't help here.

I know about transparent proxying and have tried several variants of it, but it is not what is required; the IP addresses are still lost.

I'll need to look into proxy-protocol before having an opinion, but at first sight it might be doable.

Yes, the proxy protocol is really simple and there are many code examples.

I write code in golang myself, and in a few hours I added proxy protocol support to my applications on AWS without difficulty.
The proxy protocol works very well on Amazon AWS (yes, AWS Application Load Balancers also support the proxy protocol).

@ftasnetamot
Contributor

@monoflash:
Can you describe what exactly you want to achieve? If nginx is in front, why not just terminate the haproxy-protocol connections in nginx and pass the underlying protocol on to sslh?
Perhaps this little tool might help in the chain: https://github.com/cloudflare/mmproxy
You can use this tool with exactly the same easy routing configuration as I have described here. Marking packets with iptables is no longer needed, thanks to the iproute2 configuration.

Regarding @yrutschle's comment:

I somehow assumed the kernel would be smart enough not to do actual TCP when connecting to localhost.

While researching how to get chained transparency working, I created the following picture to help me understand what's going on:
[diagram: nf-hooks-v3, derived from the nftables wiki]
This is my current understanding of how local connectivity works. It can be proven by tcpdumping lo, as described in my transparency documentation.

The kernel seems to use several shortcuts, avoiding the ingress/egress/bridging code, but it does use TCP.

@monoflash
Author

@monoflash: Can you describe what exactly you want to achieve? If nginx is in front, why not just terminate the haproxy-protocol connections in nginx and pass the underlying protocol on to sslh?

My server already uses nginx as a web server. If nginx can handle TCP/IP streams out of the box, it would be silly to ignore that, hence nginx...

In short, my goals are:

  1. Multiplexing;
  2. Maximum protection from attacks and scanners;
  3. Real IP addresses in the logs;
  4. No TCP/IP at the intermediate steps.

I have already configured everything and solved the main tasks, but the price was losing the real address with no way to restore it, and being forced to use intermediate TCP/IP ports instead of unix sockets.

My map:

[nginx stream] port 80 -> (IP) [sslh] ->
    |-> (IP) [nginx http] -> unix-socket -> [nginx] http:80
    |-> (IP) [nginx http] -> unix-socket -> [nginx] https:443
    |-> (IP) [sshd] ssh:22
    |-> (IP) [other service1]
    |-> (IP) [other service2]

I have already described why I want to drop the intermediate IP hops. The loss of the real IP in the logs is not critical yet, but it will likely mean losing control when attacks occur.

By suggesting that you add unix-socket and proxy protocol support, I wanted to make your product better.

But I've already written so many explanations that writing my own multiplexer in golang, as a replacement for sslh, is starting to seem like the more reasonable idea. Perhaps the number of lines of code would be comparable to this discussion.

I don't write C myself and can't offer a really good pull request, so it's up to you to decide what to do with my suggestion.

@ftasnetamot
Contributor

ftasnetamot commented Dec 20, 2024

So all endpoints are your own systems? It looks like only service1 and service2 are connected via proxy-protocol?
So again my question: why not just terminate the incoming haproxy-protocol in nginx and forward the connection transparently to sslh, which is also configured transparently?
In that case you have all IPs in your logs.
I still don't understand why you need to forward haproxy-protocol connections after nginx. Maybe you can explain a little more, as I can't see the proxy-protocol session in your map.

I have described one setup with nginx in front and sslh behind here; however, I moved to openresty (a fork of nginx) because of the famous lua possibilities.

Edit:
The only shortcoming in nginx transparency is that nginx does not try to keep the source port for the transparent connection.
So all logs after nginx in daisy-chained transparent services no longer show the original source port. If you compile a recent sslh with the latest changes from this year, sslh tries to keep that port for the forwarded connection.

@monoflash
Author

monoflash commented Dec 21, 2024

So all endpoints are your own systems? It looks like only service1 and service2 are connected via proxy-protocol? So again my question: why not just terminate the incoming haproxy-protocol in nginx and forward the connection transparently to sslh, which is also configured transparently?

This causes a performance and security issue. I mentioned this earlier. It was painful for the server.

In that case you have all IPs in your logs. I still don't understand why you need to forward haproxy-protocol connections after nginx. Maybe you can explain a little more, as I can't see the proxy-protocol session in your map.

nginx has many parts; I use two: the http/https server and the stream module that works with raw TCP/IP traffic. Traffic has to be forwarded between them either via a unix socket or via IP. All parts of nginx understand the proxy protocol, so there is no problem with IP addresses there.

stream {

  ## Incoming.
  server {
    listen                               203.0.113.1:80;
    listen                               [2001:DB8::41af]:80;
    proxy_pass                           $upstream_protocol;
    ssl_preread                          on;
    proxy_protocol                       on;
  }

  ## To sslh.
  server {
    listen                               unix:/run/nginx/nginx-sslh.socket proxy_protocol;
    # proxy_pass                           unix:/run/sslh/sslh.socket; ## Unix-socket is not supported.
    proxy_pass                           127.0.0.1:122;
    proxy_protocol                       off; ## Disabled because sslh does not support the proxy protocol.
    proxy_socket_keepalive               off; ## Keepalive causes problems with sslh -> OFF.
    set_real_ip_from                     unix:;
    access_log                           /var/log/nginx/stream-proxy-sslh.log proxy; # The last place where the real IP is visible.
  }

  ## Http from sslh to nginx http.
  server {
    listen                               127.0.0.1:180;
    proxy_pass                           unix:/run/nginx/nginx-http.socket;
    proxy_protocol                       on;
    proxy_socket_keepalive               on;
    set_real_ip_from                     unix:;
  }

  ## Https from sslh to nginx https.
....
}

tcp        0      0 127.0.0.1:122           0.0.0.0:*               LISTEN      18240/sslh

This is not the whole config; the part that returns https traffic from sslh is omitted.
It is arranged identically to http, but supplemented with branching on the certificate's domains, also in the stream part, before forwarding to the https part of nginx.

IP addresses are lost at the services that do not understand the proxy protocol, i.e. sslh.

This is not an ideal scheme; it is an economical and performant one. A rather complicated task is solved using only nginx and sslh. The ideal scheme could be built on HAProxy, but in any case the IP addresses would still be lost, because the multiplexer simply does not support passing them.

If the configuration and the scheme are not clear, don't dig deeper; this is a really complex and unusual setup serving more than 3 protocols on one port, with branching not only on protocol but also on HTTPS traffic and certificate.

Now I am faced with a choice: either convince you to make a more functional multiplexer, or change multiplexers. So far, I've accepted that IP addresses are completely lost at the multiplexer, but everything works.

@ftasnetamot
Contributor

ftasnetamot commented Dec 21, 2024

Ok, as far as I understand now, you are dealing with incoming pure-protocol connections, and your nginx encapsulates the forwarded connection into the proxy protocol.
I still don't understand why you are doing this. If your services were haproxys, they would be able to accept most pure protocols.
So why not forward the pure protocol transparently to a transparent sslh-ev?

In your Incoming stanza, the line proxy_protocol on; is unneeded, as by my understanding of nginx this is only used for outgoing proxy connections. If you wish to accept incoming proxy_protocol connections you need a listen 203.0.113.1:80 proxy_protocol; line.

In your to-sslh stanza I would recommend replacing the line set_real_ip_from unix:; with proxy_bind $remote_addr transparent;, so sslh will see the right incoming IP address and log it accordingly, as nginx then uses this address for the forwarded connection.

In the Http-from-sslh stanza I see several problems: proxy_socket_keepalive makes no sense here, as it configures the “TCP keepalive” behavior for outgoing connections to a proxied server and is not used on the receiving server component. Same for proxy_protocol on;, as in the initial block. Omit the line set_real_ip_from unix:; and you will see the transparent IP address coming from sslh, if you use transparency there.

And as @yrutschle already mentioned, use sslh-ev, so that you don't have a forked process for each connection.
And: you must use a very recent self-compiled version of sslh, as the changes to make sslh work in daisy-chained setups are very new.

Even when the forwarded interprocess connections use TCP, it is not a big performance issue. According to several performance tests it is somewhere around 15%-20% slower compared to unix sockets. But you have to take into consideration that rewrapping connections in the haproxy protocol consumes additional resources, while just forwarding packets is cheap. So my guess is that your performance problem came simply from too many parallel forked processes, and sslh-ev may help you.

@monoflash
Author

monoflash commented Dec 21, 2024

In your Incoming stanza, the line proxy_protocol on; is unneeded, as by my understanding of nginx this is only used for outgoing proxy connections. If you wish to accept incoming proxy_protocol connections you need a listen 203.0.113.1:80 proxy_protocol; line.

Wrong. This listener receives requests that have the proxy protocol enabled. If I disable the proxy protocol at the entrance of this listener, there will be problems.

In your to-sslh stanza I would recommend replacing the line set_real_ip_from unix:; with proxy_bind $remote_addr transparent;, so sslh will see the right incoming IP address and log it accordingly, as nginx then uses this address for the forwarded connection.

In that case sslh does not respond to requests at all. I have tried what you suggest, and it doesn't work!

In the Http-from-sslh stanza I see several problems: proxy_socket_keepalive makes no sense here, as it configures the “TCP keepalive” behavior for outgoing connections to a proxied server and is not used on the receiving server component.

Sure, sslh doesn't support this, so it doesn't work. The comment was left there in case I forget and try to optimize it again in the future... to avoid hitting the same wall twice.

Same for proxy_protocol on;, as in the initial block. Omit the line set_real_ip_from unix:; and you will see the transparent IP address coming from sslh, if you use transparency there.

I can't disable this; I would then start losing the real IP address not only at the multiplexer, but also at the other parts whose connections are branched off and received via nginx.

And as @yrutschle already mentioned, use sslh-ev, so that you don't have a forked process for each connection. And: you must use a very recent self-compiled version of sslh, as the changes to make sslh work in daisy-chained setups are very new.

I wrote that this part of the configuration is omitted so as not to "show" the domain names.
Everything is there and everything is working.
Two sections of the configuration are not shown; they distribute connections by domain and by the protocols nginx understands.

map $ssl_preread_protocol $upstream_protocol {
...
map $ssl_preread_server_name $sni_name {
...

Even when the forwarded interprocess connections use TCP, it is not a big performance issue. According to several performance tests it is somewhere around 15%-20% slower compared to unix sockets. But you have to take into consideration that rewrapping connections in the haproxy protocol consumes additional resources, while just forwarding packets is cheap. So my guess is that your performance problem came simply from too many parallel forked processes, and sslh-ev may help you.

You misunderstand. The main remaining problem is the loss of real IP addresses.
The configuration works perfectly and, protected by nginx, copes with everything that is required.


None of your recommendations work, and they are not correct.
You wrote, correctly, that you do not understand the scheme of the service.

The main problem is not to fix what is already configured, tested and working.
The task is to solve two points:

  1. Loss of IP addresses, because the multiplexer cannot work with the proxy protocol;
  2. A possible slight performance increase by using unix sockets. Yes, slight, but still an increase.

Even if I do everything as you suggest and make it all work, it will not solve the problem of IP loss in sslh, nor the performance question.
Don't fix something that isn't broken.

If you don't plan to add the proxy protocol and the ability to send and receive requests via a unix socket, say so.
The issue will be closed.

@yrutschle
Owner

FYI I have opened discussion #483 as I started implementing UNIX sockets -- it should be trivial enough.
(Sorry, I haven't read the last posts yet)

I'll look into proxy_protocol afterwards, as it is independent, but it looks fairly simple as well. Do you know if v1 is still used? From the cursory look I had, I would be more comfortable implementing only v2 with binary structures.

@monoflash
Author

monoflash commented Dec 21, 2024

FYI I have opened discussion #483 as I started implementing UNIX sockets -- it should be trivial enough. (Sorry, I haven't read the last posts yet)

I'm really looking forward to the new release.
Thank you very much!

I'll look into proxy_protocol afterwards, as it is independent, but it looks fairly simple as well. Do you know if v1 is still used? From the cursory look I had, I would be more comfortable implementing only v2 with binary structures.

Judging by how often I saw the text of proxy protocol version 1 in the logs while debugging my configuration, it is still being used.

It will probably be easiest to look at the proxy protocol implementation on the nginx side; I think a quick look at the code will answer all your questions:

  1. https://github.com/nginx/nginx/blob/930caed3bfc84e43bf4bd034150c17604dc5dc73/src/core/ngx_proxy_protocol.c
  2. https://github.com/nginx/nginx/blob/930caed3bfc84e43bf4bd034150c17604dc5dc73/src/stream/ngx_stream_realip_module.c

In my own development I use this library. It is in golang, but golang is quite close to C:
https://github.com/pires/go-proxyproto
It works with nginx, HAProxy and AWS LB.

You can look into the files implementing version 1 and version 2; the comments in the code can also help a lot (see the sketch below):
v1.go - v1
v2.go - v2
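
As a side note on telling the versions apart: v1 starts with the ASCII text "PROXY ", while v2 starts with a fixed 12-byte binary signature. A minimal Go sketch of the check, with the signature taken from the haproxy spec linked earlier:

package main

import (
	"bytes"
	"fmt"
)

// v2sig is the fixed 12-byte signature that opens every PROXY protocol v2 header.
var v2sig = []byte("\r\n\r\n\x00\r\nQUIT\n")

// proxyVersion inspects the first bytes read from a connection and reports
// which PROXY protocol version (if any) they announce.
func proxyVersion(peek []byte) int {
	switch {
	case bytes.HasPrefix(peek, v2sig):
		return 2
	case bytes.HasPrefix(peek, []byte("PROXY ")):
		return 1
	default:
		return 0 // no PROXY header; plain payload
	}
}

func main() {
	fmt.Println(proxyVersion([]byte("PROXY TCP4 203.0.113.7 203.0.113.1 56324 80\r\n"))) // 1
	fmt.Println(proxyVersion(v2sig))                                                     // 2
	fmt.Println(proxyVersion([]byte("GET / HTTP/1.0\r\n")))                              // 0
}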

I'm looking forward to a positive decision.
Thank you.

@ftasnetamot
Contributor

ftasnetamot commented Dec 21, 2024

In your Incoming stanza, the line proxy_protocol on; is unneeded, as by my understanding of nginx this is only used for outgoing proxy connections. If you wish to accept incoming proxy_protocol connections you need a listen 203.0.113.1:80 proxy_protocol; line.

Wrong. This listener receives requests that have the proxy protocol enabled. If I disable the proxy protocol at the entrance of this listener, there will be problems.

Unfortunately this answer is wrong:
To accept INCOMING proxy_protocol connections you need to configure:
listen 203.0.113.1:80 proxy_protocol;
see the NGINX documentation here.
The line proxy_protocol on; is used for a TCP connection to an upstream.

In your to-sslh stanza I would recommend replacing the line set_real_ip_from unix:; with proxy_bind $remote_addr transparent;, so sslh will see the right incoming IP address and log it accordingly, as nginx then uses this address for the forwarded connection.

In that case sslh does not respond to requests at all. I have tried what you suggest, and it doesn't work!

What I have suggested works. If it does not work for you, you are missing the corresponding transparency settings in your operating system. I manage around 10 nginx configurations with this setting.
This configuration is exactly the same as the one needed for sslh. You can either go the old-fashioned way, using iptables or nftables rules to add a MARK to packets so that a later routing rule makes sure the packets really arrive at the next service, or you can follow the lightweight configuration using ONLY iproute2 routing. No firewall hassle is needed in that case.
This document describes what is needed; it works for nginx in just the same way.
In this document I describe an sslh-nginx-ssh setup which is fully transparent; all logs show the right IP addresses.

In the Http-from-sslh stanza I see several problems: proxy_socket_keepalive makes no sense here, as it configures the “TCP keepalive” behavior for outgoing connections to a proxied server and is not used on the receiving server component.

Sure, sslh doesn't support this, so it doesn't work. The comment was left there in case I forget and try to optimize it again in the future... to avoid hitting the same wall twice.

Again, this line simply has no effect, as this stanza describes an incoming handler, and proxy_socket_keepalive is only used for OUTGOING connections.

Same for proxy_protocol on;, as in the initial block. Omit the line set_real_ip_from unix:; and you will see the transparent IP address coming from sslh, if you use transparency there.

I can't disable this; I would then start losing the real IP address not only at the multiplexer, but also at the other parts whose connections are branched off and received via nginx.
[....]

You misunderstand. The main remaining problem is the loss of real IP addresses. The configuration works perfectly and, protected by nginx, copes with everything that is required.

You don't lose real IP addresses if you use the right configuration; it works!

None of your recommendations work, and they are not correct. You wrote, correctly, that you do not understand the scheme of the service.

I disagree, as explained above in detail.

The main problem is not to fix what is already configured, tested and working. The task is to solve two points:

1. Loss of IP addresses, because the multiplexer cannot work with the proxy protocol;

Just set up straight transparent connections, and everything is fine.

2. A possible slight performance increase by using unix sockets. Yes, slight, but still an increase.

Even if I do everything as you suggest and make it all work, it will not solve the problem of IP loss in sslh, nor the performance question. Don't fix something that isn't broken.

Again NO.

If you don't plan to add the proxy protocol and the ability to send and receive requests via a unix socket, say so. The issue will be closed.

I don't plan to add anything, as I am just a helpful volunteer trying to assist you. I contributed documentation and bugfixes to sslh, precisely to make the fully transparent scenario work, as I needed it for my own nginx-based application firewall setup. I have ssh, http, https, smtp, pop3, imap, DoT, DoH and some other protocols running through this combination, all with the right source IP.

Take some time and read the documents here about transparency. The nice effect is that nginx behaves just like sslh in this respect.

So my recommendation is: set up a dummy interface for sslh, configure the right routing rules together with the interface configuration in /etc/network/interfaces, configure sslh (or sslh-ev) with transparency, and you are done. In cases where you reconnect from sslh to nginx, the receiving unit of nginx must also listen on the dummy interface.

Here is one of my stripped-down nginx (openresty, because of lots of lua) configurations:

stream {
  lua_package_path "/etc/openresty/lua/?.ljbc;/etc/openresty/lua/?.lua;;";
  log_format myapp  '[$remote_addr] [$time_iso8601] $status ';


  map $server_port $good_target {
       80          192.168.255.254:80;   #http
      443          192.168.255.254:1443; #sslh->https or ssh or ...
      110          192.168.255.254:110;  #pop
      ## more ports deleted
  }

  map $server_port $bad_target {
       80          192.168.255.254:5080;
      443          192.168.255.254:5443;
      110          192.168.255.254:5110;
      ## more ports deleted
  }

## Main Incoming handler
  server {
    access_log /var/log/nginx/myapp.log myapp;

    listen PUBLIC_IP:80;
    listen PUBLIC_IP:443;
    listen PUBLIC_IP:110;
    ## and 10 more ports
    set $final_destination "";
    ## get information, if ip is blocked
    ## deleted some lua code, setting $final_destination
    ## either to good_target or bad_target, based on fail2ban database result.
    proxy_connect_timeout 5s;
    proxy_timeout 3m;
    proxy_bind $remote_addr transparent;
    proxy_pass $final_destination;
 }

}

http { 
  lua_package_path "/etc/openresty/lua/?.ljbc;/etc/openresty/lua/?.lua;;";
  log_format myapp  '[$remote_addr] [$time_iso8601] $status ';

  server {
    access_log /var/log/nginx/myapp.log myapp;
    listen 192.168.255.254:5443 ssl;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_session_cache shared:SSL2:10m;
    ssl_session_timeout 10m;
    ssl_prefer_server_ciphers on;
    ssl_ciphers "LIST of my ciphers";
    ssl_certificate /etc/letsencrypt/live/mydomain/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/mydomain/privkey.pem;
    ssl_dhparam  /etc/openresty/dhparam-nginx.pem;
    ssl_session_tickets off;
    location = / {
      content_by_lua_block {
        ngx.print("500 you are blocked at this webserver\r\n");
        ## additional lua code removed
        return ngx.exit(500)
      }
    }
  }

  server {
    access_log /var/log/nginx/myapp.log myapp;
    listen 192.168.255.254:5080 ;
    location = / {
      content_by_lua_block {
        ngx.print("500 you are blocked at this webserver\r\n");
        ## additional lua code removed 
        return ngx.exit(500)
      }
    }
  }


}

You will see that the primary stanza listens on the public IP address, while the servers which are destinations coming from sslh, or rerouted from nginx, listen on the same dummy interface as sslh.

This is my dummy interface with routing rules:

auto dummy0
iface dummy0 inet static
    address 192.168.255.254/32
    pre-up modprobe dummy
    pre-up if [ ! -e /sys/class/net/dummy0 ]; then ip link add dummy0 type dummy ; fi
    pre-up ip rule add from 192.168.255.254 table sslh
    pre-up ip route add local 0.0.0.0/0 dev dummy0 table sslh
    post-down ip route del local 0.0.0.0/0 dev dummy0 table sslh
    post-down ip rule del from 192.168.255.254 table sslh

and finally this line has to go into /etc/iproute2/rt_tables:

110 sslh

I know what I am talking about.

@monoflash
Author

monoflash commented Dec 21, 2024

@ftasnetamot There are no words...
I don't use transparent mode...
I told you that everything is already working for me. Thank you.

The question was about implementing unix sockets and the proxy protocol in sslh, but you have wandered off into the woods...

@yrutschle
Owner

cac7f48 adds support for connecting to UNIX sockets. I am not sure it covers all the corner cases (a lot of code assumes connections are IP; ideally I'd want to do more refactoring to abstract out the protocol types). It works with an entry like this:

protocols: (
     { name: "http";  is_unix: true; host: "/tmp/nginx.sock"; port: "";  }
);

port is unused, but... necessary (I haven't looked into why; I think it's the configuration code that has sanity checks... part of the refactoring that might be needed to do this cleanly :-)
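
In case someone wants to smoke-test such an entry without a real backend, here is a minimal Go stand-in that listens on the socket path from the example above and dumps whatever sslh forwards (just an illustration, not part of sslh):

package main

import (
	"fmt"
	"net"
	"os"
)

func main() {
	const path = "/tmp/nginx.sock"
	os.Remove(path) // clear a stale socket file from a previous run
	ln, err := net.Listen("unix", path)
	if err != nil {
		panic(err)
	}
	defer os.Remove(path)

	// Accept whatever sslh forwards and print the first bytes.
	for {
		c, err := ln.Accept()
		if err != nil {
			return
		}
		buf := make([]byte, 512)
		n, _ := c.Read(buf)
		fmt.Printf("got %d bytes: %q\n", n, buf[:n])
		c.Close()
	}
}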

@yrutschle
Owner

bf08229 adds support for listening on Unix sockets.
sslh will not unlink an existing socket file, so you have to do it by hand. I am not so sure it should be the server's job to remove the file manually.

@monoflash
Author

bf08229 adds support for listening on Unix sockets. sslh will not unlink an existing socket file, so you have to do it by hand. I am not so sure it should be the server's job to remove the file manually.

The server should definitely create and delete the unix socket itself. This is the behavior of all properly written services.
If the socket file already exists, the server should either report an error or try to delete the old socket and open a new one.
The server is the owner of the unix-socket file, so it must manage it entirely by itself.

The behavior is analogous to working with a TCP/IP port.
The port is busy - an error.
The port needs to be opened - it is created. It needs to be closed - it is deleted.
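
For what it's worth, the lifecycle I mean, sketched in Go (an illustration of the pattern, not sslh code; a careful server also checks that the stale file really is a socket before unlinking it):

package main

import (
	"net"
	"os"
	"os/signal"
	"syscall"
)

const sockPath = "/run/sslh/sslh.socket"

func listenUnix(path string) (net.Listener, error) {
	// Remove a stale socket file before binding, otherwise the bind
	// fails with "address already in use". Check the file mode first,
	// so a misconfiguration never unlinks a regular file.
	if fi, err := os.Stat(path); err == nil {
		if fi.Mode()&os.ModeSocket == 0 {
			return nil, os.ErrExist // refuse to delete a non-socket
		}
		if err := os.Remove(path); err != nil {
			return nil, err
		}
	}
	return net.Listen("unix", path)
}

func main() {
	ln, err := listenUnix(sockPath)
	if err != nil {
		panic(err)
	}

	// The owner of the socket file cleans it up on shutdown, just as a
	// TCP port is released when the server exits.
	sig := make(chan os.Signal, 1)
	signal.Notify(sig, syscall.SIGINT, syscall.SIGTERM)
	go func() {
		<-sig
		ln.Close()          // stop accepting
		os.Remove(sockPath) // delete the socket file
		os.Exit(0)
	}()

	for {
		c, err := ln.Accept()
		if err != nil {
			return
		}
		c.Close() // a real server would handle the connection here
	}
}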

@yrutschle
Owner

That's fair. I complicated things for myself by thinking about a server that would accidentally try to unlink another file in case of a bad configuration :-) I'll add that later today.

@Saleh-Mumtaz

The server should definitely create and delete the unix socket itself. This is the behavior of all properly written services. If the socket file already exists, the server should either report an error or try to delete the old socket and open a new one. The server is the owner of the unix-socket file, so it must manage it entirely by itself.

For me, nginx always fails to delete the socket file. On every reboot and service restart I have to delete it manually.

@monoflash
Author

monoflash commented Dec 23, 2024

For me, nginx always fails to delete the socket file. On every reboot and service restart I have to delete it manually.

On modern Linux the /run directory lives in memory (tmpfs), which partially solves the problem of a reset or kill, after which pid files and socket files would otherwise remain behind.

But I can't confirm this behavior of nginx. I have the latest version of nginx, unix sockets are used heavily, and the backend services create and delete their own unix sockets.

nginx does not need any additional scripts for deleting the unix sockets that it serves.

I have not tested whether the unix socket files served by nginx remain behind, but even if they do, nginx overwrites them on startup.


I looked at the code. If I understood it correctly, then on line 125, before creating a unix socket, nginx deletes the old one if it exists.

https://github.com/nginx/nginx/blob/930caed3bfc84e43bf4bd034150c17604dc5dc73/src/event/quic/ngx_event_quic_socket.c#L114-L145

And this part of the code is also instructive:
https://github.com/nginx/nginx/blob/930caed3bfc84e43bf4bd034150c17604dc5dc73/src/core/ngx_connection.c#L629-L650
