This repository has been archived by the owner on Mar 17, 2024. It is now read-only.

[Bug] When serving via nginx+grpc, a firstpayload read timeout with certain clients causes the connection to be closed #159

Closed
lw4free opened this issue Oct 9, 2022 · 6 comments
Labels
bug Something isn't working

Comments

@lw4free

lw4free commented Oct 9, 2022

Describe the bug
With vs as the server, xshell fails to connect regardless of whether the client is v2ray or vs, but finalshell works fine.
With v2ray as the server, xshell works normally with both v2ray and vs clients.

To Reproduce

Expected behavior
xshell connects normally

Envs (please complete the following information):
Server: AlmaLinux 8.6, vs v1.2.4-beta.3, nginx 1.22.0

Config file
vs config:

[app]
loglevel = 0
logfile = "/var/log/verysimple/vs_log" 

[[listen]]
tag = "grpc1"
protocol = "vless"
host = "127.0.0.1"
port = 8081
version = -1
advancedLayer = "grpc"
path = "1111"
users = [ {user = "2222"}]

[[dial]]
tag = "direct"
protocol = "direct"

nginx config:


user  nginx;
worker_processes  auto;

error_log  /var/log/nginx/error.log notice;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    gzip  on;
    resolver 127.0.0.1;
    charset utf-8;

    #include /etc/nginx/conf.d/*.conf;

    map $http_upgrade $connection_upgrade {  
        default upgrade;
        ''      close;
    }

    upstream grpc1.local {
        zone grpc1.local 64k;
        server 127.0.0.1:8081;
        keepalive 600;
    }

    # Settings for a TLS enabled server.

    server {
        listen 443 ssl http2 so_keepalive=on;

        server_name 33333;

        # Load configuration files for the default server block.

        ssl_certificate /etc/ssl/4444/cert.pem;
        ssl_certificate_key /etc/ssl/4444/key.pem;
        ssl_session_timeout 1d;
        ssl_session_cache shared:MozSSL:10m;  # about 40000 sessions
        ssl_session_tickets off;

        ssl_dhparam /etc/ssl/4444/dh.pem;

        # intermediate configuration
        ssl_protocols TLSv1.2 TLSv1.3;
        ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
        ssl_prefer_server_ciphers off;

        # OCSP stapling
        ssl_stapling on;
        ssl_stapling_verify on;

        # verify chain of trust of OCSP response using Root CA and Intermediate certs
        ssl_trusted_certificate /etc/ssl/4444/cert.pem;

        location /1111/Tun {
            if ($content_type !~ "application/grpc") {
               return 404;
            }

            client_max_body_size 0;
            client_body_timeout  1d;
            client_body_buffer_size 512k;
            grpc_pass grpc://grpc1.local;
            grpc_read_timeout    1d;
            grpc_send_timeout    1d;
            grpc_socket_keepalive  on;
            grpc_set_header Connection "";
        }

        location / {
            access_log  /var/log/nginx/access.log  web;

            root /usr/share/html;
            index index.html index.htm;
        }

        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   /usr/share/nginx/html;
        }
    }
}

This is unrelated to the client.

Debug Log
Server-side log

{"L":"INFO ","T":"221009 044718.780","M":"New Accepted Conn","connid":653428,"from":"127.0.0.1:42992","handler":"tcp+grpc+vless://127.0.0.1:8081#grpc1"}
{"L":"ERROR","T":"221009 044718.881","M":"Failed in reading first payload, not because of timeout, will hung up","connid":653428,"target":"1.2.3.4:22","error":"body closed by handler"}

Other


@lw4free lw4free added the bug Something isn't working label Oct 9, 2022
@lw4free
Author

lw4free commented Oct 19, 2022

Switching the server from grpc to ws works fine. Separately, how do I enable fullcone with ws+vless? Setting fullcone = true on the server-side direct dial did not work for me.
Server-side config

[[listen]]
tag = "proxy1"
protocol = "vless"
host = "127.0.0.1"
port = 8083
version = 1
advancedLayer = "ws"
path = "/***"
early = true
users = [ ***]

[[dial]]
tag = "direct"
protocol = "direct"
fullcone = true

Client-side config

[[listen]]
tag = "tproxy"
protocol = "tproxy"
ip = "0.0.0.0"
port = 3128

[[dial]]
tag = "proxy"
protocol = "vless"
uuid = "***"
host = "127.0.0.1"
port = 6083
version = 1
advancedLayer = "ws"
path = "/***"
early = true

@e1732a364fed
Owner

Try the new code and see whether it fixes the grpc issue.

@e1732a364fed
Owner

My guess is it won't be a complete fix: reading the code again, I spotted another place that could cause this problem, possibly coming from old clash code that was absorbed. Let me change it once more.

@e1732a364fed
Owner

Fixed in 1eb4568; this time I believe the grpc issue is resolved.

@e1732a364fed e1732a364fed changed the title [Bug] With vs as the server, xshell fails to connect [Bug] When serving via nginx+grpc, a firstpayload read timeout causes the connection to be closed Oct 31, 2022
@e1732a364fed e1732a364fed changed the title [Bug] When serving via nginx+grpc, a firstpayload read timeout causes the connection to be closed [Bug] When serving via nginx+grpc, a firstpayload read timeout with certain clients causes the connection to be closed Oct 31, 2022
@e1732a364fed
Owner

For fullcone, both your client's vless v1 dial and the server's direct dial need fullcone configured; in other words, the dial on both ends must have fullcone enabled.
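
For reference, a minimal sketch of that combination, assembled from the configs already posted in this thread (the uuid, host, port, and path values are the placeholders used above, not verified settings); the only new line relative to the earlier client config is the fullcone entry on the vless dial:

# Client side: the vless v1 dial also needs fullcone
[[dial]]
tag = "proxy"
protocol = "vless"
uuid = "***"
host = "127.0.0.1"
port = 6083
version = 1
advancedLayer = "ws"
path = "/***"
early = true
fullcone = true

# Server side: the direct dial keeps fullcone as already configured
[[dial]]
tag = "direct"
protocol = "direct"
fullcone = true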

Closing this for now and treating it as resolved. If testing shows there is still a problem, reopen it.

@lw4free
Author

lw4free commented Nov 1, 2022

Thanks a lot.
