help request: memory keeps growing #10392
Comments
cc: @Sn0rt
Can you continue to observe it? It is normal for memory to grow while NGINX is processing requests.
From the information in the screenshot, it is not possible to determine where the memory leak occurred.
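For readers who want to do that observation themselves, here is a minimal sketch (not from the original thread) that samples the resident set size of each NGINX worker at a fixed interval, so growth can be correlated with request volume:

```shell
# Sample the RSS of every nginx worker once a minute; compare successive
# samples against traffic volume to distinguish normal growth from a leak.
while true; do
  date
  ps -C nginx -o pid,rss,args | grep 'worker process'
  sleep 60
done
```

RSS is reported in kilobytes; a worker whose RSS never plateaus under steady traffic is the signal worth reporting.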
@Sn0rt
If an OOM does occur, it will need to be investigated. Thank you for your report.
How's it going? I'm having the same problem.
I can't detect any abnormalities in your XRay analysis. Some users have also reported memory leaks before, in #10349. Combining your information, I strongly suspect the problem was introduced in 3.4; judging from the changelog, I guess it was caused by the etcd changes. Can you help me test it? (I tried to reproduce it on CentOS but failed.) Use the 3.3 etcd file to replace the 3.5 one and take a look.
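The comment does not spell out which file is meant. The sketch below assumes it refers to APISIX's etcd client module apisix/core/etcd.lua (it could also mean the etcd server itself); the install path and branch name are likewise assumptions, not details from this issue:

```shell
# Hypothetical test: back up the installed etcd client module, replace it
# with the one from the release/3.3 branch, then reload and watch memory.
# (A 3.3 module may not be drop-in compatible with a 3.4+ tree; this mirrors
# the maintainer's suggestion, not a supported procedure.)
cd /usr/local/apisix   # adjust to your install prefix
cp apisix/core/etcd.lua apisix/core/etcd.lua.bak
curl -o apisix/core/etcd.lua \
  https://raw.githubusercontent.com/apache/apisix/release/3.3/apisix/core/etcd.lua
apisix reload
```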
@channer99 Could you send these flame graphs to my email monkeydluffy6017@gmail.com? They are not very clear on the website.
They don't seem to be viewable when you click and check them on the web.
What do you think about this issue? @Sn0rt @monkeyDluffy6017 If requests with a req_body of 500–2,000 bytes are sent continuously, and then a request with a large req_body (6,000,000 bytes) is suddenly sent, memory rises and then drops, but does not appear to be fully reclaimed.
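As an illustration only, here is a rough way to exercise that traffic pattern; the route URI is a placeholder, not taken from this issue:

```shell
# Hypothetical reproduction: many small request bodies, then one ~6 MB body.
ROUTE="http://127.0.0.1:9080/your-route"      # placeholder route
head -c 1000    /dev/zero > small.bin
head -c 6000000 /dev/zero > big.bin
for i in $(seq 1 10000); do
  curl -s -o /dev/null -X POST --data-binary @small.bin "$ROUTE"
done
curl -s -o /dev/null -X POST --data-binary @big.bin "$ROUTE"
# Compare worker RSS before and after with the sampling loop shown earlier.
```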
How about removing the ext-plugin and running a test?
@channer99 Have you made any progress on this?
@king4sun what's your APISIX version? |
We have located the problem: #10614. It affects all versions between 3.4.0 and 3.7.0.
It also affects 3.2.2, but we use 3.2.1.
@wklken could you open another issue to discuss this?
The problem doesn't exist on version 3.2.2 |
|
3.2.2 also merged PR #9456; was the problem caused not by that PR itself, but by the bug-fixing PR that followed it?
See here: https://github.com/apache/apisix/blob/release/3.2/CHANGELOG.md#bugfix. I don't think it should have been merged into a PATCH version. We upgraded from 3.2.1 and then downgraded because of bug #9951.
I think you are right; it will affect 3.2.2 as well.
Description
A tendency for memory to increase gradually was confirmed.
I checked many issues related to memory (prometheus, ext-plugin, traffic-split, etc.), but I could not determine the exact cause.
The plugin below is used as a global plugin.
The following plugins were set for individual routes.
Also, the traffic-split plugin is configured as in #10349 (weight: 1, weight: 0); a sketch of that shape follows below.
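For concreteness, here is a minimal sketch of that weight-1/weight-0 traffic-split shape via the Admin API; the route id, URI, admin key, port, and upstream addresses are placeholders, not values from this issue:

```shell
curl http://127.0.0.1:9180/apisix/admin/routes/1 \
  -H 'X-API-KEY: <your-admin-key>' -X PUT -d '
{
  "uri": "/your-route",
  "plugins": {
    "traffic-split": {
      "rules": [{
        "weighted_upstreams": [
          {
            "upstream": {
              "type": "roundrobin",
              "nodes": { "127.0.0.1:1980": 1 }
            },
            "weight": 1
          },
          { "weight": 0 }
        ]
      }]
    }
  },
  "upstream": {
    "type": "roundrobin",
    "nodes": { "127.0.0.1:1981": 1 }
  }
}'
```

An entry without an "upstream" field falls back to the route's own upstream, and a weight of 0 sends it no traffic; this weight-1/weight-0 shape is the pattern #10349 reported alongside the memory growth.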
My config.yaml is:
So I obtained a flame graph through OpenResty XRay.
Can you check where the memory leak occurs?
We cannot use APISIX in our production environment because of the memory growth issue.
Is another analyzer (besides OpenResty XRay) needed to identify this issue accurately? Please advise.
Environment

- APISIX version (run `apisix version`): 3.4.1
- Operating system (run `uname -a`): Linux 5.4.0-147-generic
- OpenResty / NGINX version (run `openresty -V` or `nginx -V`): openresty/1.21.4.2
- etcd version, if relevant (run `curl http://127.0.0.1:9090/v1/server_info`): 3.5.0
- LuaRocks version, if relevant (run `luarocks --version`): 3.8.0