[T2][202405] Zebra process consuming a large amount of memory resulting in OOM kernel panics #20337
This is the output from Cisco Chassis LC0:
Thanks for the output @anamehra
Arista
I'm guessing the difference in …
We tried patching #19717 into 202405 and we saw that the amount of memory Zebra used was significantly reduced:
This explains why …
@lguohan, @StormLiangMS, @dgsudharsan for viz.
Output from Nokia LC (docker exec -it bgp1 ps aux | grep frr): the issue exists with master too, but the usage seems lower than on 2405.
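For reference, a minimal sketch of repeating that check on a linecard; the container name bgp1 is the per-ASIC instance used in the comment above, and on a single-ASIC device it is typically just bgp:

```
# Resident memory (RSS, KiB) of the FRR daemons inside the bgp container
docker exec -it bgp1 ps aux | grep frr

# Narrow it down to zebra only (the [z] keeps grep itself out of the match)
docker exec -it bgp1 ps aux | grep '[z]ebra'
```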
Attaching the full output of show memory.
#19717 merged to 202405.
On full T2 devices in 202405, Arista is seeing the zebra process in FRR consume a large amount of memory (10x compared to 202205).
202405:
202205:
This results in the system having very low amounts of free memory:
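A quick way to check the remaining free memory on the device, assuming the standard procps free is available (SONiC also exposes this through show system-memory):

```
# Host-wide memory usage in MiB; the "available" column indicates OOM headroom
free -m
```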
If we run a command which causes zebra to consume even more memory, like show ip route, it can cause kernel panics due to OOM.
When we look at the show memory output in FRR, we see the max nexthops count is significantly higher on 202405 than on 202205.
202405:
202205:
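For anyone reproducing the comparison, a sketch of pulling those FRR counters from the host; the bgp1 container name and the grep pattern are assumptions and may need adjusting per ASIC:

```
# Full FRR memory statistics (per-allocator counts and sizes)
docker exec -it bgp1 vtysh -c "show memory"

# Keep only the nexthop-related allocators, where 202405 shows much larger max counts
docker exec -it bgp1 vtysh -c "show memory" | grep -i nexthop
```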
NOTES:
- both 202205 and 202405 have the same number of routes installed
- we also see an increase on t2-min topologies, but the absolute memory usage is at least half of what T2 is seeing, so we aren't seeing OOMs on t2-min
- the FRR version changed between 202205 (FRRouting 8.2.2) and 202405 (FRRouting 8.5.4)
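To confirm which FRR version a given image is running (again assuming the bgp1 container name):

```
# FRR reports its version through vtysh
docker exec -it bgp1 vtysh -c "show version"
```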