Memory increase after upgrade to 3.4.9 #846 (duplicate of #812)
it's all good. i just referenced it for the sake of completeness. in terms of the problem at hand, i don't have anything to add. i cannot reproduce it (as i illustrated in my load tests). unfortunately, i don't know how to help. |
ok, thanks for clarifying. I'll see if we can find anything on our side and report back. |
@rdc-Green I was wondering if you happened to make any changes to your database instance just prior to seeing the reported behaviour? Particularly with session timeouts? |
Hi @shadsnz we haven't made any changes to the database instance. |
Hi, I think I have found the issue. I started to see this issue after updating to 3.5.0 from 3.4.3. Initially I thought it was the database, since the MeterValues table is around 78 GB, but it wasn't. Then I started to look at the number of messages per second. Also, the zig-zag pattern on the heap happens much more frequently on the new versions than on 3.4.3. With this, I started to bisect the git history to find where the new behaviour was introduced. With a bit more digging, I noticed that the Buffer Pools on the JVM grew by 8 MB every time a message was received. These two pictures show this behaviour in action. After some tries, I applied this patch on top of 3.5.0.
This seems to bring back the old behaviour. P.S. I am currently running a test with 120 simulated EVSEs, reporting MeterValues every second. |
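For context, a minimal sketch of the kind of change being described, assuming the limits are applied through Jetty 10's `JettyWebSocketServerContainer` (the class and setters are Jetty's public API; the exact place SteVe configures this, and the 8 MB figure, are illustrative assumptions):

```java
import org.eclipse.jetty.servlet.ServletContextHandler;
import org.eclipse.jetty.websocket.server.config.JettyWebSocketServletContainerInitializer;

public class OcppWebSocketConfigSketch {

    /** Roughly the pre-patch situation: explicit, enlarged limits (values are illustrative). */
    public static void withOverrides(ServletContextHandler context) {
        JettyWebSocketServletContainerInitializer.configure(context, (servletContext, container) -> {
            container.setMaxTextMessageSize(8 * 1024 * 1024); // 8 MB per text frame
            container.setInputBufferSize(8 * 1024 * 1024);    // buffer used for reads from the socket
        });
    }

    /** Roughly what the patch does: register the container but leave Jetty's defaults
        (64 KiB max text message size, 4 KiB input/output buffers) untouched. */
    public static void withDefaults(ServletContextHandler context) {
        JettyWebSocketServletContainerInitializer.configure(context, (servletContext, container) -> {
            // no size overrides; Jetty defaults apply
        });
    }
}
```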
The weekend test went well, and the memory was stable for the whole weekend. After reading a bit more about this, the default value for the MaxTextMessageSize is 65k, and if a message is bigger than this value, it will be rejected. Does anyone know of a situation where this 65k would be a limitation? P.S. Keeping the current MaxTextMessageSize doesn't seem to affect the allocation, and the memory is still stable. |
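To get a feel for how far a typical message is from that limit, here is a small, self-contained sketch that measures the UTF-8 size of a hypothetical OCPP 1.6J MeterValues frame (the payload shape is illustrative; real stations batch more or fewer samples, so this is only a ballpark, not an answer to the question above):

```java
import java.nio.charset.StandardCharsets;

public class MeterValuesSizeCheck {

    public static void main(String[] args) {
        // A hypothetical OCPP 1.6J MeterValues CALL frame with a single sampled value.
        // A single sample set is typically well under 1 KiB, so the 64 KiB default would
        // only bite for very large batched messages.
        String sampledValue = "{\"value\":\"12345.6\",\"measurand\":\"Energy.Active.Import.Register\",\"unit\":\"Wh\"}";
        String frame = "[2,\"uid-0001\",\"MeterValues\",{\"connectorId\":1,\"transactionId\":42,"
                + "\"meterValue\":[{\"timestamp\":\"2022-06-01T12:00:00Z\",\"sampledValue\":[" + sampledValue + "]}]}]";

        int bytes = frame.getBytes(StandardCharsets.UTF_8).length;
        System.out.printf("frame size: %d bytes (Jetty default text limit: %d bytes)%n", bytes, 65536);
    }
}
```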
hey @jpires this is amazing detective work! thanks for your diagnosis!
i remember having issues with one of the messages (which you pointed out) with a station, because of which we increased the default. -- it is weird that we had this config for a long time without it ever causing issues... only for it to become a problem after the jetty 10 migration. my changes in 7a41a3f just use the new api; therefore, they are only of a syntactical nature. the defaults back then were 65K as well. this alone should not have changed the behaviour. i assume that during the refactoring of websocket (this is what happened with jetty 10) some behaviour changed or a regression was introduced. the regression must be flying under the radar with a small size (65K), while a greater value (8MB) accentuates it. there are a couple of jetty issues that hint at similar memory problems, for example: jetty/jetty.project#6696 and jetty/jetty.project#6328. i get a "rabbit hole" vibe from these jetty discussions and therefore want to be pragmatic: remove our special settings and fall back to the jetty defaults (as is the case in your git diff). i would like your valuable work to be part of this project as an official contribution under your name. can you pls make a PR? i will merge it ASAP. |
I have created a pull request with the changes.
It seems to me that the behaviour was different before that commit. I did some digging (very brief) in the Jetty code, and those two settings control the size of the buffer used to perform the read operation on the socket: it allocates the buffer, reads from the socket, copies the read data to a different buffer, and then deallocates the buffer. I would say that we can probably close this ticket after the pull request is merged. |
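A simplified model of that allocate-read-copy-release cycle (this is not Jetty's actual code, just an illustration of why a large input buffer size is costly per incoming message):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

public class ReadBufferSketch {

    // Simplified model of the behaviour described above: a buffer of inputBufferSize is
    // obtained for each read, the bytes that actually arrived are copied out, and the
    // large buffer is released again. With an 8 MB inputBufferSize, every incoming message
    // briefly claims 8 MB, even if the OCPP frame itself is only a few hundred bytes.
    static byte[] readOnce(SocketChannel channel, int inputBufferSize) throws IOException {
        ByteBuffer readBuffer = ByteBuffer.allocateDirect(inputBufferSize);
        int n = channel.read(readBuffer);
        if (n <= 0) {
            return new byte[0];
        }
        readBuffer.flip();
        byte[] payload = new byte[readBuffer.remaining()];
        readBuffer.get(payload); // copy into a right-sized array for further processing
        return payload;          // the large readBuffer is released / returned to the pool here
    }
}
```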
Use the default values for input/output buffers #846
fixed by #1058 |
I have the same issue. On 3.4.9, memory consumption grows to about 12 GB over 6-7 days and then the server crashes.
This is on my server:
Linux 5.13.0-1025-azure on x86_64
Intel(R) Xeon(R) CPU E5-2673 v4 @ 2.30GHz, 4 cores
Virtualmin version 7.1-1
Real Memory 15GB
java -version
openjdk version "11.0.15" 2022-04-19
OpenJDK Runtime Environment (build 11.0.15+10-Ubuntu-0ubuntu0.20.04.1)
OpenJDK 64-Bit Server VM (build 11.0.15+10-Ubuntu-0ubuntu0.20.04.1, mixed mode, sharing)
I have about 20 charge points running, mostly Alfen with one Phihong
After downgrading to an earlier version of SteVe (3.4.6), it is now working properly (average memory 650 MB).
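For anyone hitting this, one way to check whether the growth is in the direct buffer pools (as in the diagnosis above) rather than the ordinary heap is to poll the JDK's `BufferPoolMXBean`s. A small sketch that could run inside the server JVM (or be adapted to poll over remote JMX):

```java
import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;

public class BufferPoolWatch {

    public static void main(String[] args) throws InterruptedException {
        // Prints the size of the "direct" and "mapped" buffer pools once a minute.
        // Steady growth of the direct pool (rather than the heap) would point at the
        // oversized WebSocket buffers discussed above.
        while (true) {
            for (BufferPoolMXBean pool : ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
                System.out.printf("%s pool: count=%d, used=%d bytes, capacity=%d bytes%n",
                        pool.getName(), pool.getCount(), pool.getMemoryUsed(), pool.getTotalCapacity());
            }
            Thread.sleep(60_000);
        }
    }
}
```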