Memory leak on "observed" node #3338
Client config and server config, just as the issue template requires. Why ignore it?
I use split and complex routing on the client (homeland traffic via a VPS balancer), so it will be difficult to provide the full configuration. I will give the parts of the config relevant to the problem: outbounds for the observed node
observer config
balancer
"observed" nodes
routing:
out:
I assume the polls need to be done less frequently than the timeout policy (I am using default values for the policy).
Try removing unnecessary options and routes so you can provide a complete configuration; it is difficult to determine the problem if it cannot be reproduced.
What if you change the observatory to a burstObservatory config?
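For context, a minimal sketch of what such a `burstObservatory` block might look like. The selector prefix `out-`, the probe destination, and the interval values are placeholders, not taken from the reporter's actual config:

```json
{
  "burstObservatory": {
    "subjectSelector": ["out-"],
    "pingConfig": {
      "destination": "https://connectivitycheck.gstatic.com/generate_204",
      "interval": "1h",
      "timeout": "30s",
      "sampling": 3
    }
  }
}
```

Unlike the plain `observatory`, the burst variant probes the selected outbounds in batches at the configured interval rather than continuously, which is why it was suggested here as a way to reduce probe pressure.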
It seems the problem was that I had a "geosite:instagram" entry in the routing section:
I changed the probeURL to a different address that is not routed through the balancer outbounds, and the problem on the observed nodes is no longer reproduced.
@sakontwist interesting, it sounds like the root cause is that you created an infinite loop.
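One way such a loop could arise, as a sketch only (the tags, domain, and probe URL below are assumptions, not the reporter's actual config): the observatory's probe URL falls under a routing rule whose destination is the balancer itself, so probe traffic intended to test each balancer member is instead fed back into the balancer:

```json
{
  "routing": {
    "rules": [
      { "domain": ["geosite:instagram"], "balancerTag": "vps-balancer" }
    ],
    "balancers": [
      { "tag": "vps-balancer", "selector": ["out-"] }
    ]
  },
  "observatory": {
    "subjectSelector": ["out-"],
    "probeURL": "https://www.instagram.com/"
  }
}
```

Moving `probeURL` to an address that no balancer rule matches, as the reporter did, breaks the cycle.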
Even if it leads to a loop, the loop should still manifest on the client side. Therefore these observer requests must differ from normal requests in a way that causes memory leaks on the server side (for example, mux sessions that may not close properly).
I've done some experiments and the conclusions are as follows:
I disabled mux.cool on the Xray-to-Xray trunks for testing (not very convenient, because RKN also detects servers by the number of sessions), and the problem does not appear until clients connect. I know that most clients use nekobox and v2rayNG. Testing v2rayNG, I noticed in the server logs that mux.cool calls come in even with VLESS (rprx) whenever a device makes a UDP request (e.g. DNS). I only managed to forbid mux by enabling mux in the client with the concurrency set to "-1" (I apologize for writing this here rather than to the v2ray developers; it is just an example). So far I have recommended that clients avoid mux where possible, but apparently someone is still using it, because the problem comes back periodically. I will try to pick a window with no client activity and confirm there is no problem on a clean Xray-to-Xray connection.
Yesterday I ran a simple scheme with two Xray 1.8.11 45ab4cb (go1.22.2 linux/amd64) instances, with no extraneous clients. After enabling mux, the server started eating memory. On another similar trunk the result was the same: server
client
Hello everyone. Let me start by saying that I have read all the articles and issues on similar topics, but I have not found a definitive and clear solution to this problem, so I decided to write here, hoping I am not bothering the developers or users too much. The issues described in #3221 and #3338 do not provide a clear answer to the problem of memory leaks or OOM (out of memory), and it is also unclear whether they discuss the same problem or different ones. Perhaps this can revive the discussion. Thank you in advance! Xray-core version: 24.11.11
I turned on the "observer" on April 30 and a memory leak appeared on the nodes being "observed". The observed nodes use a primitive configuration: no DNS section, a single direct outbound.
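A "primitive" observed-node config in the sense described (no `dns` block, one direct outbound) would look roughly like the sketch below; the port, UUID, and protocol choice are placeholders, not the reporter's actual values:

```json
{
  "inbounds": [
    {
      "port": 443,
      "protocol": "vless",
      "settings": {
        "clients": [{ "id": "00000000-0000-0000-0000-000000000000" }],
        "decryption": "none"
      }
    }
  ],
  "outbounds": [
    { "protocol": "freedom", "tag": "direct" }
  ]
}
```

The point of the reporter's remark is that a node this simple leaves few candidates for the leak besides the observatory traffic arriving from the observing client.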
RSS xray graph (green):
What debugging information needs to be attached?