SplitHTTP h3 h2 multiplex controller #3560

Closed · mmmray opened this issue Jul 19, 2024 · 54 comments
Labels
enhancement New feature or request

Comments

@mmmray (Collaborator) commented Jul 19, 2024

Originally this was reported as a panic under #3556, and the changes there had some effect on it, but the discussion gradually turned into an unrelated v2rayNG bug. That bug is fixed now, but the dialerProxy issue remains.

configs:

config-sh-h3.json
config-sh-h3-server.json

./xray -c config-sh-h3-server.json
./xray -c config-sh-h3.json

command to reproduce:

$ curl -x socks5h://127.0.0.1:2080 ifconfig.me
curl: (52) Empty reply from server

error in the logs when using d8994b7:

transport/internet/splithttp: failed to send download http request > Get "https://127.0.0.1:6001/6e67de80-f752-4df0-a828-3bcc3d1aaaf6": transport/internet/splithttp: unsupported connection type: %T&{reader:0xc0004658f0 writer:0xc000002250 done:0xc0002c84e0 onClose:[0xc000002250 0xc000002278] local:0xc000465890 remote:0xc0004658c0}

when reverting d8994b7, the client crashes instead:

panic: interface conversion: net.Conn is *cnc.connection, not *internet.PacketConnWrapper

goroutine 67 [running]:
github.com/xtls/xray-core/transport/internet/splithttp.getHTTPClient.func2({0x15735c8, 0xc000311ae0}, {0x0?, 0xc00004f700?}, 0xc00031e4e0, 0xc0001e14d0)
        github.com/xtls/xray-core/transport/internet/splithttp/dialer.go:108 +0x145
github.com/quic-go/quic-go/http3.(*RoundTripper).dial(0xc0002f7ce0, {0x15735c8, 0xc000311ae0}, {0xc00034ea30, 0xe})
        github.com/quic-go/quic-go@v0.45.1/http3/roundtrip.go:312 +0x27a
github.com/quic-go/quic-go/http3.(*RoundTripper).getClient.func1()
        github.com/quic-go/quic-go@v0.45.1/http3/roundtrip.go:249 +0x77
created by github.com/quic-go/quic-go/http3.(*RoundTripper).getClient in goroutine 66
        github.com/quic-go/quic-go@v0.45.1/http3/roundtrip.go:246 +0x289
mmmray added the bug (Something isn't working) label Jul 19, 2024
@mmmray (Collaborator, Author) commented Jul 20, 2024

The QUIC transport probably has an identical issue:

var udpConn *net.UDPConn
switch conn := rawConn.(type) {
case *net.UDPConn:
	udpConn = conn
case *internet.PacketConnWrapper:
	udpConn = conn.Conn.(*net.UDPConn)
default:
	// TODO: Support sockopt for QUIC
	rawConn.Close()
	return nil, errors.New("QUIC with sockopt is unsupported").AtWarning()
}

@RPRX (Member) commented Jul 20, 2024

This is the type connection struct in common/net/cnc/connection.go, but it doesn't implement net.PacketConn yet. I'll write it.
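
A minimal sketch of what that could look like, assuming the existing Read/Write of the cnc pipe already carry whole datagrams (this is an illustration, not the actual commit):

	// Hypothetical addition in common/net/cnc/connection.go: the two methods that,
	// together with the existing net.Conn methods, would satisfy net.PacketConn.
	func (c *connection) ReadFrom(p []byte) (n int, addr net.Addr, err error) {
		n, err = c.Read(p)
		// The pipe carries no per-packet source address; report the fixed peer.
		return n, c.RemoteAddr(), err
	}

	func (c *connection) WriteTo(p []byte, _ net.Addr) (n int, err error) {
		// The destination is fixed by the underlying outbound, so addr is ignored.
		return c.Write(p)
	}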

@RPRX (Member) commented Jul 20, 2024

Even after implementing ReadFrom and WriteTo, the local and remote of the type connection struct are both 0.0.0.0, and it ends up panicking here in quic-go:

func (m *connMultiplexer) AddConn(c indexableConn) {
	m.mutex.Lock()
	defer m.mutex.Unlock()

	connIndex := m.index(c.LocalAddr())
	p, ok := m.conns[connIndex]
	if ok {
		// Panics if we're already listening on this connection.
		// This is a safeguard because we're introducing a breaking API change, see
		// https://github.com/quic-go/quic-go/issues/3727 for details.
		// We'll remove this at a later time, when most users of the library have made the switch.
		panic("connection already exists") // TODO: write a nice message
	}
	m.conns[connIndex] = p
}

Maybe fill local with an arbitrary value to trick it? Also, I'm not sure whether the other end of cnc knows this is UDP rather than TCP; judging by the fact that WG works, it probably does.
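
If one did want to trick it, a rough sketch of the idea (entirely hypothetical; whether a fabricated address is safe is exactly the open question above):

	// Hypothetical: give every cnc-backed "UDP" connection a unique fake local
	// address so connMultiplexer.index(c.LocalAddr()) never collides.
	var fakeLocalPort uint32

	func fakeLocalAddr() net.Addr {
		p := atomic.AddUint32(&fakeLocalPort, 1)
		return &net.UDPAddr{IP: net.IPv4(127, 0, 0, 1), Port: 10000 + int(p%50000)}
	}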

@RPRX (Member) commented Jul 20, 2024

Actually, this problem can be solved later, or maybe doesn't even need solving, since SplitHTTP H3 basically doesn't need to be combined with dialerProxy. I only ran into it because my original outbound config wasn't updated properly.

@RPRX (Member) commented Jul 20, 2024

I found that SplitHTTP H3's latency is twice that of H2; it looks like there is no reuse? #3560 (comment) also only panics once a second connection appears, so there is a bit of commonality.

@RPRX (Member) commented Jul 20, 2024

I found that SplitHTTP H3's latency is twice that of H2; it looks like there is no reuse?

@dyhkwong Regarding this, it shouldn't be necessary to manually call OpenStream(), right?

@RPRX (Member) commented Jul 20, 2024

SplitHTTP H3 also has globalDialerMap, but strangely quic-go's http3 doesn't reuse connections automatically and Dials every time. Is something not set up correctly?

@Fangliding (Member)

Maybe quic-go/http3 just doesn't support it; we don't open streams ourselves, and reusing the EarlyConnection or the UDPConn both throw errors.
Better to just go with mux.

@Mfire2001

Maybe quic-go/http3 doesn't support it. I didn't implement stream myself. Reusing earlyConnection or UDPConn will result in an error ( or mux).

According to RFC 9114 Section 4.1, only one request can be sent on each stream:

A client sends an HTTP request on a request stream, which is a client-initiated bidirectional QUIC stream; see Section 6.1. A client MUST send only a single request on a given stream. A server sends zero or more interim HTTP responses on the same stream as the request, followed by a single final HTTP response

@RPRX (Member) commented Jul 21, 2024

Maybe quic-go/http3 just doesn't support it; we don't open streams ourselves, and reusing the EarlyConnection or the UDPConn both throw errors.

#3565 (comment) With that many POSTs on the upload they surely can't all be opening new connections; that would be painful. It feels like reuse does work, "but for some reason it stops reusing once a new connection is proxied". Could it be because of the GET? @mmmray what do u think?

Better to just go with mux.

Rejected; MUX over QUIC would have head-of-line blocking, which throws away one of H3's big advantages.

@RPRX (Member) commented Jul 21, 2024

Rejected; MUX over QUIC would have head-of-line blocking, which throws away one of H3's big advantages.

I checked the group chat; to avoid misunderstanding, this refers to Xray's MUX over a single QUIC stream.

@mmmray (Collaborator, Author) commented Jul 21, 2024

I have only seen this lack of connection reuse with HTTP/1.1. There it is inherent to the protocol: a chunked transfer cannot be aborted by the client without tearing down the TCP connection. Upload was still correctly reused.

In h2 it works normally already. I still have to catch up with how QUIC is behaving here, but I think there is no inherent reason related to the protocol.

You can try to create a separate RoundTripper for upload and download, to see if GET interferes with the connection reuse of POST. This is how I debugged things in h1. If nobody does it I can take a look next week.
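
For reference, a rough sketch of that debugging idea against the h3 dialer (only TLSClientConfig is a real quic-go field here; tlsCfg and the client variables are illustrative):

	// Hypothetical debugging aid: give upload and download their own HTTP/3
	// round trippers so the GET and the POSTs cannot share one QUIC connection.
	uploadClient := &http.Client{
		Transport: &http3.RoundTripper{TLSClientConfig: tlsCfg},
	}
	downloadClient := &http.Client{
		Transport: &http3.RoundTripper{TLSClientConfig: tlsCfg},
	}
	// Send the POSTs with uploadClient and the GET with downloadClient, then
	// compare connection reuse in Wireshark against the single-client setup.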

@RPRX (Member) commented Jul 21, 2024

I can take a look next week.

You scared me for a second; I checked the date and realized today is Sunday.

Anyway, for now: "I found that SplitHTTP H3's latency is twice that of H2; it looks like there is no reuse?" #3560 (comment)

@Fangliding (Member)

Maybe quic-go/http3 doesn't support it. I didn't implement stream myself. Reusing earlyConnection or UDPConn will result in an error ( or mux).

According to rfc 9114 section 4.1 only one request can be sent on each stream

A client sends an HTTP request on a request stream, which is a client-initiated bidirectional QUIC stream; see Section 6.1. A client MUST send only a single request on a given stream. A server sends zero or more interim HTTP responses on the same stream as the request, followed by a single final HTTP response

The machine translator misinterpreted my words.
What I'm talking about is opening streams to reuse the QUIC connection, not reusing a QUIC stream.

@RPRX (Member) commented Jul 21, 2024

SplitHTTP H3 also has globalDialerMap, but strangely quic-go's http3 doesn't reuse connections automatically and Dials every time. Is something not set up correctly?

I debugged the code a bit and found it isn't quic-go's fault, which is somewhat funny. SplitHTTP's dialer.go has this spot:

	if isH3 {
		dest.Network = net.Network_UDP
	}

so that after the client is finally stored:

	globalDialerMap[dialerConf{dest, streamSettings}] = client

the lookup at the start of the next call never finds it:

	if client, found := globalDialerMap[dialerConf{dest, streamSettings}]; found {
		return client
	}

However, reusing a single QUIC connection for everything isn't necessarily better, so I'll commit first and won't rush the next release; please test whether throughput differs.
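
In other words, the cache key is mutated between the lookup and the store. A hedged sketch of the shape of the fix, not necessarily the committed change (createHTTPClient stands in for whatever builds the client):

	if client, found := globalDialerMap[dialerConf{dest, streamSettings}]; found {
		return client
	}

	// Keep the map key stable; only the copy handed to the QUIC dialer is
	// switched to UDP, so the next lookup with the original dest can hit.
	dialDest := dest
	if isH3 {
		dialDest.Network = net.Network_UDP
	}
	client := createHTTPClient(dialDest, streamSettings) // illustrative helper
	globalDialerMap[dialerConf{dest, streamSettings}] = client
	return client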

@Fangliding (Member) commented Jul 21, 2024

So the client was simply never found.
The reason it was written that way originally is that otherwise the dialer further down wouldn't know it should return a udpConn.

@RPRX (Member) commented Jul 21, 2024

I found that SplitHTTP H3's latency is twice that of H2; it looks like there is no reuse?

But in my tests, 22535d8's H3 latency is still about 3/4 higher than H2. The test URL is HTTPS, i.e. 2-RTT; I'm going to start from Wireshark again.

@RPRX (Member) commented Jul 21, 2024

I found that SplitHTTP H3's latency is twice that of H2; it looks like there is no reuse?

But in my tests, 22535d8's H3 latency is still about 3/4 higher than H2. The test URL is HTTPS, i.e. 2-RTT; I'm going to start from Wireshark again.

It seems there is an extra round trip before the inner Client Hello is sent. Anyway, please all measure the latency and look at Wireshark; I'm off to sleep.

@Fangliding (Member)

I found that SplitHTTP H3's latency is twice that of H2; it looks like there is no reuse?

But in my tests, 22535d8's H3 latency is still about 3/4 higher than H2. The test URL is HTTPS, i.e. 2-RTT; I'm going to start from Wireshark again.

It seems there is an extra round trip before the inner Client Hello is sent. Anyway, please all measure the latency and look at Wireshark; I'm off to sleep.

I analyzed the capture: the http3 request appears to be blocking. splithttp needs both a GET and a POST to establish a connection, and the current behavior is GET first, then POST. With h2 the two requests are sent at the same time, but with h3 the client only issues the POST after the server has returned 200 OK, which adds an extra RTT and hence the extra latency.

@Fangliding (Member) commented Jul 21, 2024

Wireshark screenshots below.

Here is h2: the GET and POST are sent at the same time.
[screenshot]

Here is h3: the POST is only issued after the GET goes out and the server's 200 OK comes back.
[screenshot]

@Fangliding (Member) commented Jul 21, 2024

Very strange. I thought it was an h3 client problem, but I tried handling the requests with two clients, and even across two QUIC connections the POST in one still waits for the GET in the other to be sent first, as if they were telepathic.

@mmmray (Collaborator, Author) commented Jul 21, 2024

So is it maybe the server that enforces this synchronization?

@Fangliding (Member)

So is it maybe the server that enforces this synchronization?

It is obvious that the time at which the request is sent is controlled by the local client.

@Fangliding (Member)

Very strange. I thought it was an h3 client problem, but I tried handling the requests with two clients, and even across two QUIC connections the POST in one still waits for the GET in the other to be sent first, as if they were telepathic.

Even if one of the two directions is switched to h2, the behavior is still there.

@RPRX (Member) commented Jul 21, 2024

I analyzed the capture: the http3 request appears to be blocking. splithttp needs both a GET and a POST to establish a connection, and the current behavior is GET first, then POST. With h2 the two requests are sent at the same time, but with h3 the client only issues the POST after the server has returned 200 OK, which adds an extra RTT and hence the extra latency.

I debugged the code again (details omitted) and found the problem is this section in the OpenDownload function of SplitHTTP's client.go:

		trace := &httptrace.ClientTrace{
			GotConn: func(connInfo httptrace.GotConnInfo) {
				remoteAddr = connInfo.Conn.RemoteAddr()
				localAddr = connInfo.Conn.LocalAddr()
				gotConn.Close()
			},
		}

With H2, except for the very first request, GotConn is called back immediately and thus gotConn.Close() runs, so OpenDownload and dialer.go's Dial return right away.

With H3, GotConn is never called back, so OpenDownload only returns after c.download.Do(req), and remoteAddr and localAddr are never obtained.


quic-go does not support httptrace yet: quic-go/quic-go#3342


Since remoteAddr and localAddr aren't obtained with H3 anyway, I've changed it for now to call gotConn.Close() directly to avoid blocking; as for getting the addresses, @mmmray can look into that later.
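
A minimal sketch of that workaround, reusing the gotConn signal and isH3 flag from the surrounding code (the actual commit may differ):

	if isH3 {
		// quic-go never fires httptrace.GotConn (quic-go/quic-go#3342), so don't
		// wait for it; remoteAddr and localAddr simply stay unset for now.
		gotConn.Close()
	} else {
		trace := &httptrace.ClientTrace{
			GotConn: func(connInfo httptrace.GotConnInfo) {
				remoteAddr = connInfo.Conn.RemoteAddr()
				localAddr = connInfo.Conn.LocalAddr()
				gotConn.Close()
			},
		}
		req = req.WithContext(httptrace.WithClientTrace(req.Context(), trace))
	}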

@0ldm0s

This comment was marked as off-topic.

@mmmray

This comment was marked as off-topic.

@0ldm0s

This comment was marked as off-topic.

@RPRX (Member) commented Jul 24, 2024

* Multiplex mode, pick one of three:
* open a new connection after a connection has been reused N times
* maximum number of concurrent sub-connections
* maximum total number of concurrent connections

Thinking it over, the following design for SplitHTTP H2/H3 multiplex control is more suitable.

First, a base mode, pick one of two:

  • Maximum number of concurrent sub-connections (concurrency): fill existing connections with reuse before opening new ones
  • Maximum number of concurrent TCP/UDP "connections": open new connections until the total is reached, then start reusing

Second, two limit dimensions, which can both be in effect at the same time:

  • Maximum cumulative number of times a single connection may be reused; default 0 means unlimited
  • Maximum lifetime of a single connection; default 0 means unlimited

Finally, all of the above options take a range, and Xray picks a concrete value at random each time, to dilute potential fingerprints.

This way the first option of the original proposal, "open a new connection after N reuses", can also be composed from these, and more combinations become possible. Suggestions are welcome; otherwise it will be implemented as described.

I'm also thinking about doing something with the "algorithm" that picks which connection a new stream goes on, but that isn't worked out yet and can be patched later.
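
A hedged sketch of what such a set of options could look like as a config struct (the names and the range helper are illustrative, not a final schema):

	// Illustrative only: every knob is a range from which Xray would pick a
	// concrete value at random, per the proposal above. A 0/0 range means
	// "unlimited" for the two limit dimensions.
	type RangeConfig struct {
		Min int32
		Max int32
	}

	type MuxControllerConfig struct {
		MaxConcurrency RangeConfig // base mode A: max sub-connections per connection
		MaxConnections RangeConfig // base mode B: max concurrent TCP/UDP connections
		CMaxReuseTimes RangeConfig // limit: cumulative reuses per connection
		CMaxLifetime   RangeConfig // limit: max lifetime per connection (e.g. minutes)
	}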

@Fangliding (Member)

The max_connections/min_streams/max_streams set from the neighboring project feels sufficient; even ray's own mux only has a single "max sub-connections" option.

@PoneyClairDeLune (Contributor)

Dynamic concurrency scaling based on send rate.

@RPRX (Member) commented Jul 24, 2024

The max_connections/min_streams/max_streams set from the neighboring project feels sufficient; even ray's own mux only has a single "max sub-connections" option.

We are designing new multiplex controls for our own needs, and you tell me to copy the neighbors? What's your problem? Sorry, I gave it a thumbs-down; nothing personal.

Once this mechanism is polished it will be added to Xray Mux as well. Besides, Xray Mux fixed the legacy v2 issues long ago, yet people in the group chat still bring them up; I guess that reputation is hard to shake off.

@RPRX (Member) commented Jul 24, 2024

Dynamic concurrency scaling based on send rate.

Thinking it over, this fits best as a third layer, that is, as the "algorithm for picking which connection a new stream goes on"; that way even more combinations become possible.
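
A rough sketch of where that third layer could plug in, assuming a picker over the pool of open connections (names are purely illustrative):

	// Hypothetical third layer: given the currently open connections, choose
	// which one the next stream goes on; returning nil requests a new connection.
	type connPicker interface {
		Pick(conns []*muxConn) *muxConn
	}

	// Example policy: prefer the least-loaded connection while its recent send
	// rate stays under a threshold, otherwise ask for a fresh connection.
	type rateAwarePicker struct {
		maxBytesPerSecond int64
	}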

@RPRX (Member) commented Jul 24, 2024

Since this issue is mainly discussing h3 multiplex, I'm changing the title, just like the previous issue #3556 was opened for dialerProxy but ended up fixing NG.

The dialerProxy discussion can continue in PR #3570.

RPRX changed the title "SplitHTTP h3 closes connections abruptly when dialerProxy is used" to "SplitHTTP h3 multiplex" Jul 24, 2024
RPRX added the enhancement (New feature or request) label and removed the bug (Something isn't working) label Jul 24, 2024
RPRX changed the title "SplitHTTP h3 multiplex" to "SplitHTTP h3 multiplex design" Jul 24, 2024
RPRX changed the title "SplitHTTP h3 multiplex design" to "SplitHTTP h3 h2 multiplex control" Jul 24, 2024
@RPRX (Member) commented Jul 24, 2024

@mmmray do you have time to implement these? If not, I can write them.

@mmmray (Collaborator, Author) commented Jul 24, 2024

The first two bullet points make sense to me; however, I think setting "min connections" and "max connections" feels more natural than choosing a "mode" (I think they are equivalent anyway?)

  • You can set min_connections to N, and xray will immediately open N connections regardless of traffic.
  • You can set min_connections to 0, and xray will open connections lazily.

So it would be min_connections/max_connections/max_streams; that feels more consistent with existing mux settings (also from other cores), and I think it covers the same use cases as the "mode".

Then there are two more options:

Maximum cumulative number of times a single connection may be reused; default 0 means unlimited (The maximum number of times a connection can be reused. The default value is 0, which means no limit.)
Maximum lifetime of a single connection; default 0 means unlimited (The maximum survival time of a connection, the default is 0 for no limit)

  • "number of times" I don't understand, how about "number of bytes sent/rcvd"?
  • I understand this is for eliminating long-running connections as a feature. Generally, if the need is censorship-resistance, I think that we should wait until it gets blocked.

Because of point 2, I don't see an urgent need for it right now, of course if you have the motivation to do it, it's good, it's one step further ahead. I'm a bit too overwhelmed with other tasks right now.

I will focus my efforts on improving upload bandwidth, since it's the biggest complaint. I think it's unrelated and won't overlap with your efforts, but not 100% sure. It's stilll not clear to me when I will actually find the time to properly focus though...

And yeah, uquic and the dialerProxy issues are also on my list. Maybe dialerProxy is not important, but I just don't want to keep the PR hanging around (also because of some private conversations where people tried to use it for whatever reason).

@RPRX (Member) commented Jul 24, 2024

@mmmray min_connections doesn't cover those use cases, nor does it exist in other cores. It's more like the "pre-connection" idea discussed back when switch was being designed: open connections in advance, before any traffic is proxied, to eliminate the user-perceived RTT of any protocol. But it needs to push some dummy traffic or it becomes a fingerprint, so it hasn't been added yet; later.

"number of times": I see the "cumulative" in my wording didn't make it into your English; read it as: number of sub-connections (cumulative).
"number of bytes sent/rcvd" is a good suggestion; those can be added as two more limit dimensions.

I understand this is for eliminating long-running connections as a feature. Generally, if the need is censorship-resistance, I think that we should wait until it gets blocked.

From experience, overly long-lived connections sometimes just don't get blocked but do become unstable, so this is a pretty practical limit dimension; it should take a unit such as m, h, etc.

I will focus my efforts on improving upload bandwidth, since it's the biggest complaint. I think it's unrelated and won't overlap with your efforts, but not 100% sure. It's still not clear to me when I will actually find the time to properly focus though...

Actually I think optimizing upload is the easiest part: implementing the "upload once every N milliseconds (range)" scheme I described would take almost no time, which is why I stopped discussing it.
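
A hedged sketch of that pacing idea (the interval bounds, uploadPipe and postChunk are made up for illustration):

	// Illustrative pacing loop: flush the upload buffer at a randomized interval
	// taken from a configured range, instead of POSTing on every write.
	minMs, maxMs := 20, 50 // would come from the configured range
	for {
		d := time.Duration(minMs+rand.Intn(maxMs-minMs+1)) * time.Millisecond
		time.Sleep(d)
		if chunk := uploadPipe.Drain(); len(chunk) > 0 { // hypothetical buffer
			postChunk(chunk) // hypothetical: one POST per flush
		}
	}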

@RPRX (Member) commented Jul 24, 2024

To ⑥: UDP can definitely migrate, because XUDP has migration built in. For TCP it depends on whether you reconnect to the same CDN node; if you switch networks it's hard to say.

That said, each SplitHTTP connection corresponds to a UUID, so if sequence numbers, buffering and ACKs were added to the download direction as well, migration of the inner TCP traffic could be implemented, at some cost.

XUDP migration is free because it's UDP anyway: dropped is dropped. Both ends buffer a little while disconnected, but not much, and that buffering comes from Xray's machinery rather than XUDP itself.

@RPRX (Member) commented Jul 24, 2024

By the way, when I upgraded XUDP UoT Migration last year, didn't I tease something called PLUX? Only a little was ever made public, so let me drop the suspense. Back in 2021 PLUX was a game accelerator. Sending duplicate packets over UDP is old news; the point is that it works on TCP-like streams (QUIC counts too): open several connections at once and send identical (but of course padded) packets, so that if one TCP connection happens to lag, drop packets or even disconnect, the others still work, which greatly improves gaming over a TCP proxy. Game UDP packets are tiny, so duplicating them costs little and doesn't hog bandwidth. Last year I extended it with a few more things but forgot the details; I'll dig through last year's notes.

Of course you can call it another grand promise, or add one to the "PowerPoint" tally, but XUDP was promised when VLESS first came out and only shipped with Xray, and REALITY was promised in early 2021 and only released in early 2023.

@RPRX (Member) commented Jul 24, 2024

Actually, even though I often joke about these being grand promises and the neighbors call them PowerPoint, it is different from the usual sense, because the designs I describe are all implementable rather than pie in the sky.

But as I've said before, "Xray-core has always done what it thinks needs doing, in order". I've never been in a hurry to deliver on these promises; their turn will come when it comes.

Finally, best wishes.

@RPRX (Member) commented Jul 24, 2024

Note: PLUX is the name for the previously mentioned Accelerator after adding a TCP version; the latter name wasn't distinctive, and it was named PLUX because it at least doubles the connections.

@RPRX (Member) commented Jul 26, 2024

@ll11l1lIllIl1lll I saw your messages in the group chat; are you implementing these mux options?

RPRX changed the title "SplitHTTP h3 h2 multiplex control" to "SplitHTTP h3 h2 multiplex controller" Aug 16, 2024
RPRX closed this as completed in b1c6471 Sep 16, 2024
leninalive pushed a commit to amnezia-vpn/amnezia-xray-core that referenced this issue Oct 29, 2024

XTLS#3613 (comment)

Closes XTLS#3560 (comment)

---------

Co-authored-by: mmmray <142015632+mmmray@users.noreply.github.com>