
tcpdump -n becomes very slow after some time if a large number of IP addresses is present #1136

Open
pspacek opened this issue Feb 23, 2024

pspacek commented Feb 23, 2024

Version affected:

tcpdump version 4.99.4
libpcap version 1.10.4 (with TPACKET_V3)
OpenSSL 3.2.1 30 Jan 2024

I've noticed that a long-running tcpdump -n -r instance becomes very slow after some time. In my case, processing a 90 GB PCAP is much faster at the beginning (around 40 MB/s) and progressively slows down. Around the 60 GB mark I noticed something was wrong - it was processing just 5 MB/s, so I killed it.

CPU profile captured at the very beginning of capture processing:
[SVG flamegraph attachment]

CPU profile at 60 GB mark:
[SVG flamegraph attachment]

My interpretation is that something is wrong with the ip6addr_string implementation.

A quick peek into addrtoname.c at commit 2456bbd suggests that the fixed-size "hash table" of 4096 entries is probably what causes the trouble when a large number of IP addresses is present - and since I'm looking at all traffic from one ISP, the cache isn't doing me any good in my case.
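
To illustrate the access pattern I have in mind (this is a simplified sketch, not the actual addrtoname.c code, and all identifiers here are invented): with a fixed number of buckets, every lookup degenerates into a scan of an ever-growing collision chain once the number of distinct addresses far exceeds 4096.

```c
#include <stdint.h>
#include <stdlib.h>

#define HASHSIZE 4096           /* fixed bucket count, never grows */

struct entry {
    uint32_t addr;              /* keyed on an IPv4 address for brevity; same idea for IPv6 */
    char *name;                 /* cached textual form of the address */
    struct entry *next;         /* collision chain */
};

static struct entry *table[HASHSIZE];

/* Look up an address, inserting a new entry on a miss. */
static struct entry *
lookup(uint32_t addr)
{
    unsigned int h = addr % HASHSIZE;
    struct entry *e;

    /* With N cached addresses, this walk averages N/HASHSIZE steps,
     * so the per-packet cost grows linearly once N >> 4096. */
    for (e = table[h]; e != NULL; e = e->next)
        if (e->addr == addr)
            return e;

    e = calloc(1, sizeof(*e));
    if (e == NULL)
        return NULL;
    e->addr = addr;
    e->next = table[h];
    table[h] = e;
    return e;
}
```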

I don't know what design constraints tcpdump operates under, so I can't judge whether an adaptive hash-table resize is all that's needed, or whether some sort of LRU mechanism will also be needed to limit memory usage, etc.
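
For what it's worth, here is a rough sketch of what I mean by an adaptive resize (hypothetical code, not a patch against tcpdump; grow(), nbuckets and nentries are made-up names): double the bucket array and rehash whenever the load factor exceeds 1, which keeps the average chain length roughly constant.

```c
#include <stdint.h>
#include <stdlib.h>

struct entry {
    uint32_t addr;
    char *name;
    struct entry *next;
};

static struct entry **buckets;
static size_t nbuckets = 4096;  /* start at the current fixed size */
static size_t nentries;

/* Double the bucket array and rehash everything; called rarely,
 * so insertion stays amortized O(1). */
static void
grow(void)
{
    size_t newsize = nbuckets * 2;
    struct entry **newb = calloc(newsize, sizeof(*newb));

    if (newb == NULL)
        return;                 /* out of memory: keep the old table */

    for (size_t i = 0; i < nbuckets; i++) {
        struct entry *e, *next;
        for (e = buckets[i]; e != NULL; e = next) {
            next = e->next;
            size_t h = e->addr % newsize;
            e->next = newb[h];
            newb[h] = e;
        }
    }
    free(buckets);
    buckets = newb;
    nbuckets = newsize;
}

static struct entry *
insert(uint32_t addr)
{
    struct entry *e;
    size_t h;

    if (buckets == NULL)
        buckets = calloc(nbuckets, sizeof(*buckets));
    if (nentries + 1 > nbuckets)    /* load factor > 1: resize */
        grow();

    h = addr % nbuckets;
    e = calloc(1, sizeof(*e));
    if (e == NULL)
        return NULL;
    e->addr = addr;
    e->next = buckets[h];
    buckets[h] = e;
    nentries++;
    return e;
}
```

An LRU cap on top of this would bound memory for truly unbounded address sets, but that trades eviction bookkeeping on every lookup for the bounded footprint.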
