SplitHTTP h3 h2 multiplex controller #3560
The QUIC transport probably has an identical issue: Xray-core/transport/internet/quic/dialer.go, lines 151 to 161 at a0040f1.
This one is under common/net/cnc/connection.go.
Even with ReadFrom and WriteTo implemented, quic-go's connection multiplexer panics:

```go
func (m *connMultiplexer) AddConn(c indexableConn) {
	m.mutex.Lock()
	defer m.mutex.Unlock()
	connIndex := m.index(c.LocalAddr())
	p, ok := m.conns[connIndex]
	if ok {
		// Panics if we're already listening on this connection.
		// This is a safeguard because we're introducing a breaking API change, see
		// https://github.com/quic-go/quic-go/issues/3727 for details.
		// We'll remove this at a later time, when most users of the library have made the switch.
		panic("connection already exists") // TODO: write a nice message
	}
	m.conns[connIndex] = p
}
```

Maybe we could fill in an arbitrary value for local to trick it? Besides, I'm not sure whether the other end of cnc knows this is UDP rather than TCP.
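The "fill in an arbitrary value for local" idea could look roughly like this: a sketch (all names invented, not actual Xray or quic-go API) of wrapping a net.PacketConn so that LocalAddr() reports a process-unique synthetic address. Since the multiplexer indexes connections by their local address, unique fake addresses would never collide in its map:

```go
package main

import (
	"fmt"
	"net"
	"sync/atomic"
)

// addrCounter hands out process-unique IDs for the fake addresses.
var addrCounter atomic.Int64

// fakeAddr is a hypothetical synthetic net.Addr; its String() is unique
// per wrapper, so an index derived from LocalAddr() never collides.
type fakeAddr struct{ id int64 }

func (a fakeAddr) Network() string { return "udp" }
func (a fakeAddr) String() string  { return fmt.Sprintf("fake-local-%d", a.id) }

// uniqueAddrConn wraps a real PacketConn but overrides LocalAddr.
type uniqueAddrConn struct {
	net.PacketConn
	fake net.Addr
}

func wrap(c net.PacketConn) *uniqueAddrConn {
	return &uniqueAddrConn{PacketConn: c, fake: fakeAddr{id: addrCounter.Add(1)}}
}

func (c *uniqueAddrConn) LocalAddr() net.Addr { return c.fake }

func main() {
	// Two wrappers around any underlying conn report distinct local addresses.
	w1, w2 := wrap(nil), wrap(nil)
	fmt.Println(w1.LocalAddr(), w2.LocalAddr())
}
```

Whether quic-go uses LocalAddr() for anything beyond indexing (logging, path validation) would need checking before relying on this.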
Actually, this problem can be solved later.
I found that SplitHTTP over H3 has twice the latency of H2; it looks like there is no connection reuse? #3560 (comment) also panics only when a second connection appears, so there is some commonality.
@dyhkwong Regarding this issue: surely it shouldn't be necessary to call OpenStream() manually?
SplitHTTP H3 also has globalDialerMap, but strangely quic-go's http3 does not reuse connections automatically; it Dials every time. Is something not set up properly?
Maybe quic-go/http3 just doesn't support it. It doesn't implement stream reuse itself, and reusing an EarlyConnection or UDPConn yourself raises an error (
According to RFC 9114 Section 4.1, only one request can be sent on each stream:

> A client sends an HTTP request on a request stream, which is a client-initiated bidirectional QUIC stream; see Section 6.1. A client MUST send only a single request on a given stream. A server sends zero or more interim HTTP responses on the same stream as the request, followed by a single final HTTP response.
#3565 (comment) With that many upload POSTs, surely they can't all be opening new connections,

No. MUX over QUIC would have head-of-line blocking, which throws away one of H3's main advantages.

Checked the group chat; to avoid misunderstanding: this refers to Xray's MUX over QUIC on a single stream.
I have only seen this lack of connection reuse with HTTP/1.1. There, it is inherently because of the protocol: a chunked transfer cannot be aborted by the client without tearing down the TCP connection. Upload was still correctly reused. In h2 it already works normally. I still have to catch up with how QUIC is behaving here, but I think there is no inherent reason related to the protocol. You can try to create a separate RoundTripper for upload and download, to see if GET interferes with the connection reuse of POST. This is how I debugged things in h1. If nobody does it, I can take a look next week.
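The suggested experiment, separate RoundTrippers for upload and download, takes only a few lines to set up; newIsolatedClients is a hypothetical helper name, not anything in the Xray codebase:

```go
package main

import (
	"fmt"
	"net/http"
)

// newIsolatedClients returns two http.Clients that cannot share TCP/TLS
// connections, because each owns its own Transport (connection pool).
// If latency changes when GET and POST are split onto separate pools,
// the two verbs were interfering with each other's connection reuse.
func newIsolatedClients() (download, upload *http.Client) {
	download = &http.Client{Transport: &http.Transport{}}
	upload = &http.Client{Transport: &http.Transport{}}
	return
}

func main() {
	d, u := newIsolatedClients()
	// The two clients hold distinct transports, hence distinct pools.
	fmt.Println(d.Transport != u.Transport)
}
```

For h3 the same split would be done with two separate http3 round-trippers instead of http.Transport.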
Anyway, for now: "I found that SplitHTTP over H3 has twice the latency of H2; it looks like there is no connection reuse?" #3560 (comment)
The machine translator misinterpreted my words.
Debugged the code a bit and found it is not quic-go's problem. This:

```go
if isH3 {
	dest.Network = net.Network_UDP
```

means that after the client is finally stored with:

```go
globalDialerMap[dialerConf{dest, streamSettings}] = client
```

the lookup at the start of the next call never finds it:

```go
if client, found := globalDialerMap[dialerConf{dest, streamSettings}]; found {
	return client
}
```

However, reusing a single QUIC connection for everything is not necessarily better, so I'll commit first without rushing the next release; please test whether throughput differs.
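The bug class described here can be illustrated in isolation (hypothetical simplified types, not the actual Xray code): mutating part of a map key between the lookup and the store means the cached client is never found again, while normalizing the key before the lookup restores reuse:

```go
package main

import "fmt"

// dialerKey is a toy stand-in for dialerConf.
type dialerKey struct {
	network string // "tcp" or "udp"
	addr    string
}

// getClientBuggy mirrors the bug: it mutates the key *after* the lookup
// (tcp -> udp for H3), so the stored entry can never be found again and
// every call dials a new client.
func getClientBuggy(cache map[dialerKey]string, k dialerKey, isH3 bool, dials *int) string {
	if c, found := cache[k]; found {
		return c
	}
	*dials++
	if isH3 {
		k.network = "udp" // key mutated between lookup and store
	}
	c := fmt.Sprintf("client-%d", *dials)
	cache[k] = c
	return c
}

// getClientFixed normalizes the key *before* the lookup, so lookup and
// store agree and the second call reuses the first client.
func getClientFixed(cache map[dialerKey]string, k dialerKey, isH3 bool, dials *int) string {
	if isH3 {
		k.network = "udp"
	}
	if c, found := cache[k]; found {
		return c
	}
	*dials++
	c := fmt.Sprintf("client-%d", *dials)
	cache[k] = c
	return c
}

func main() {
	k := dialerKey{network: "tcp", addr: "example.com:443"}

	buggyDials := 0
	buggyCache := map[dialerKey]string{}
	getClientBuggy(buggyCache, k, true, &buggyDials)
	getClientBuggy(buggyCache, k, true, &buggyDials)

	fixedDials := 0
	fixedCache := map[dialerKey]string{}
	getClientFixed(fixedCache, k, true, &fixedDials)
	getClientFixed(fixedCache, k, true, &fixedDials)

	fmt.Println("buggy dials:", buggyDials, "fixed dials:", fixedDials)
	// buggy dials: 2 fixed dials: 1
}
```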
So the client was simply never found in the first place (
But in my tests, at 22535d8 the H3 latency is still 3/4 higher than H2. The test URL is HTTPS, which is 2-RTT.

It seems there is an extra round trip before the inner Client Hello is sent. Anyway, everyone please measure the latency and check Wireshark.
Analyzed the capture. This HTTP/3 request appears to be blocking. SplitHTTP needs both a GET and a POST request to establish a connection, and the current behavior is GET first, then POST. With h2, the two requests are sent simultaneously, but with h3 the client only issues the POST after the server returns 200 OK, which adds one RTT of extra latency.

Very strange. I thought it was the h3 client's problem, but I tried handling the requests with two clients, and even across two QUIC connections, the POST in one still waits until the GET in the other has been sent, as if they were telepathically linked (
So is it maybe the server that enforces this synchronization? |
It is obvious that the time at which the request is sent is controlled by the local client.
Even when one of upload/download is replaced with h2, the behavior persists ((
Debugged the code again (details omitted) and found the problem is this part of the OpenDownload function in SplitHTTP's client.go:

```go
trace := &httptrace.ClientTrace{
	GotConn: func(connInfo httptrace.GotConnInfo) {
		remoteAddr = connInfo.Conn.RemoteAddr()
		localAddr = connInfo.Conn.LocalAddr()
		gotConn.Close()
	},
}
```

With H2, GotConn is called back immediately on every request except the first, so gotConn.Close() lets OpenDownload and dialer.go's Dial return immediately. With H3, GotConn is never called back, so OpenDownload only returns after c.download.Do(req), and remoteAddr and localAddr are never obtained. quic-go does not support httptrace yet: quic-go/quic-go#3342. Since H3 cannot obtain remoteAddr and localAddr right now anyway, I changed it to call gotConn.Close() directly to avoid blocking; @mmmray can look into obtaining the addresses later.
Thought about it; the following design is more suitable for controlling SplitHTTP H2/H3 multiplexing. First, a choice between two basic modes:

Then two limit dimensions, which can take effect simultaneously:

Finally, all of the above options take ranges, and Xray randomly picks a concrete value each time, to dilute potential fingerprints. This way the original first option, "open a new connection after N reuses", can also be composed out of these, and more possibilities become available; suggestions welcome. I can also imagine doing something with the "algorithm" that chooses which connection gets a new stream.
The max_connections/min_streams/max_streams set from next door already feels sufficient. Even Xray's own mux has only a single max-sub-connections option (((
Dynamic concurrency scaling based on send rate. |
Once this mechanism is complete, it will also be added to Xray Mux.
Thought about it; it fits best as a third level, i.e. as the "algorithm for choosing which connection when opening a new stream". That way even more combinations become possible.
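As a sketch of this third level, the connection-selection "algorithm" could be a small pluggable interface (all names hypothetical), with round-robin and fewest-active-streams as two obvious strategies:

```go
package main

import "fmt"

// picker: given the active-stream counts of the currently open
// connections, choose which one gets the next stream. Making this
// pluggable is the proposed third level, on top of the mode and limits.
type picker interface {
	pick(streams []int) int // returns an index into streams
}

// roundRobin cycles through connections in order.
type roundRobin struct{ next int }

func (p *roundRobin) pick(streams []int) int {
	i := p.next % len(streams)
	p.next++
	return i
}

// fewestStreams always picks the least-loaded connection.
type fewestStreams struct{}

func (fewestStreams) pick(streams []int) int {
	best := 0
	for i, n := range streams {
		if n < streams[best] {
			best = i
		}
	}
	return best
}

func main() {
	fmt.Println(fewestStreams{}.pick([]int{3, 1, 2})) // picks index 1
	rr := &roundRobin{}
	fmt.Println(rr.pick([]int{0, 0, 0}), rr.pick([]int{0, 0, 0}))
}
```

The random-range idea from above could then randomize the strategy choice itself as well.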
@mmmray Do you have time to implement these? If not, I can write them.
The first two bullet points make sense to me; however, I think setting "min connections" and "max connections" feels more natural than choosing a "mode" (I think they are equivalent anyway?)
So it is Then there are two more options:
Because of point 2, I don't see an urgent need for it right now; of course, if you have the motivation to do it, that's good, it's one step further ahead. I'm a bit too overwhelmed with other tasks right now. I will focus my efforts on improving upload bandwidth, since it's the biggest complaint. I think it's unrelated and won't overlap with your efforts, but I'm not 100% sure. It's still not clear to me when I will actually find the time to properly focus, though... And yes, uquic and the dialerProxy issues are also on my list. Maybe dialerProxy is not important, but I just don't want to keep the PR hanging around (also due to some private conversations where people tried to use it for whatever reason).
@mmmray Regarding "number of times": I see your English missed the "cumulative" qualifier; see this: number of sub-connections (cumulative)
Regarding ⑥: UDP can definitely migrate, because XUDP has migration built in. For TCP, it depends on whether you connect to the same CDN node; after a network switch it is hard to say. That said, each SplitHTTP connection corresponds to a UUID, so the downlink could also get XUDP-style migration. For XUDP it is free of cost because the payload is UDP anyway: a dropped packet is just dropped, and while disconnected both ends buffer only a little, buffered by Xray's own mechanism rather than by XUDP.
By the way, last year when I upgraded XUDP UoT Migration, didn't I tease something called PLUX? Only a little was ever made public, so let's drop the suspense. PLUX in 2021 was a game accelerator. Redundant multi-send over UDP is old hat; the point is that PLUX works over TCP-like streams (QUIC counts too): open several connections at once and send identical (but of course padded) packets, so that when one TCP connection happens to lag, drop packets, or even disconnect, the others still work. This can greatly improve the experience of gaming over a TCP proxy.
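The receiving side of such a redundant-send scheme reduces to first-copy-wins deduplication by sequence number; a toy sketch (invented names, no padding or real I/O):

```go
package main

import (
	"fmt"
	"sync"
)

// packet is one unit of the stream, tagged with a sequence number so
// duplicates arriving over different paths can be recognized.
type packet struct {
	seq     uint64
	payload string
}

// deduper keeps the first copy of each sequence number and drops the
// rest, so one slow or broken path does not stall the stream.
type deduper struct {
	mu   sync.Mutex
	seen map[uint64]bool
}

// accept returns true only for the first copy of a sequence number.
func (d *deduper) accept(p packet) bool {
	d.mu.Lock()
	defer d.mu.Unlock()
	if d.seen[p.seq] {
		return false
	}
	d.seen[p.seq] = true
	return true
}

func main() {
	d := &deduper{seen: map[uint64]bool{}}
	const paths = 3 // the same packet is carried by every path
	var delivered []string
	for seq := uint64(0); seq < 2; seq++ {
		p := packet{seq: seq, payload: fmt.Sprintf("data-%d", seq)}
		for i := 0; i < paths; i++ {
			if d.accept(p) {
				delivered = append(delivered, p.payload)
			}
		}
	}
	fmt.Println(delivered) // each payload delivered exactly once
}
```

A real implementation would also need reordering and a bound on the `seen` set; this only shows the duplicate-suppression core.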
Actually, even though I often jokingly call these "big pies" and next door calls them "PPT", they differ from the usual sense of vaporware, because the designs I describe are all implementable, not pipe dreams. But as I've said before, "Xray-core has always been doing, in order, the things it believes should be done", so I've never been in a hurry to deliver them. Finally, best wishes.
Note: PLUX is the previously mentioned Accelerator plus a TCP version, renamed because the old name wasn't distinctive; it's called PLUX because it at least doubles the connections.
@ll11l1lIllIl1lll I saw your messages in the group; are you implementing these mux options?
XTLS#3613 (comment) Closes XTLS#3560 (comment) Co-authored-by: mmmray <142015632+mmmray@users.noreply.github.com>
Originally this was reported as a panic under #3556, and the changes in there had some effect on this. But slowly the issue became about some unrelated v2rayNG bug. That bug is fixed now, but the dialerProxy issue remains.
configs:
config-sh-h3.json
config-sh-h3-server.json
command to reproduce:
error in the logs when using d8994b7:
when reverting d8994b7, the client crashes instead: