
Error after starting qnsm-inspect #31

Closed
peiyuefeng opened this issue Sep 1, 2020 · 4 comments

peiyuefeng commented Sep 1, 2020

Startup command: "./qnsm-inspect -f ./qnsm_inspect.cfg -c . -p 1"
Error message:

[APP] Initializing CPU core map ...
[APP] CPU core mask = 0x0000000000000000000000000000003f
[APP] Initializing EAL ...
EAL: Detected 7 lcore(s)
EAL: Probing VFIO support...
EAL: PCI device 0000:02:01.0 on NUMA socket -1
EAL: probe driver: 8086:100f net_e1000_em
EAL: PCI device 0000:02:04.0 on NUMA socket -1
EAL: probe driver: 8086:100f net_e1000_em
EAL: PCI device 0000:02:05.0 on NUMA socket -1
EAL: probe driver: 8086:100f net_e1000_em
[APP] Initializing MEMPOOL0 ...
[APP] Initializing MEMPOOL1 ...
[APP] Initializing LINK0 (1) (1 RXQ, 0 TXQ) ...
PANIC in app_init_link():
LINK0 (0): init error (-95)
6: [./qnsm-inspect() [0x40f4fd]]
5: [/lib64/libc.so.6(__libc_start_main+0xf5) [0x7f4756b7bb15]]
4: [./qnsm-inspect() [0x40f3c3]]
3: [./qnsm-inspect() [0x579ab0]]
2: [./qnsm-inspect() [0x40d175]]
1: [./qnsm-inspect() [0x4dc66a]]
Aborted (core dumped)
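
(For context: return code -95 is -EOPNOTSUPP, "operation not supported", i.e. the port setup asked the em PMD, or the virtual NIC behind it, for something it cannot do. Below is a minimal sketch of decoding such a code with the DPDK 16.11-era API; port 0 and the plain rte_eth_dev_configure call are illustrative assumptions, not the exact call that fails inside app_init_link.)

#include <stdio.h>
#include <string.h>
#include <rte_eal.h>
#include <rte_ethdev.h>
#include <rte_errno.h>

int main(int argc, char **argv)
{
    if (rte_eal_init(argc, argv) < 0)
        return -1;

    uint8_t port_id = 0;                      /* assumed: the single bound NIC */
    struct rte_eth_conf conf;
    memset(&conf, 0, sizeof(conf));
    conf.rxmode.mq_mode = ETH_MQ_RX_NONE;     /* no RSS, matching the failing run */

    /* one RX and one TX queue; app_init_link's exact arguments may differ */
    int ret = rte_eth_dev_configure(port_id, 1, 1, &conf);
    if (ret < 0)
        printf("configure failed: %d (%s)\n", ret, rte_strerror(-ret));
    return ret;
}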

Environment: VMware, 3 NICs (Intel e1000), NAT mode; NIC 0 is used for management, and only NIC 1 is bound, for the DPDK test.
qnsm_inspect.cfg is as follows:

[EAL]
log_level = 8
n = 1
socket_mem = 1024
master_lcore = 0
[IDPS]
conf_file = ./suricata.yaml
;mbuf mempool cfg
;add mbuf private size param
[MEMPOOL0]
buffer_size = 2304
pool_size = 131072
cache_size = 256
cpu = 0 ;socket_id
private_size = 64 ;sizeof(QNSM_PACKET_INFO)
;for ids
[MEMPOOL1]
buffer_size = 2304
pool_size = 131072
cache_size = 256
cpu = 0
private_size = 64
;link cfg
;[LINK0]
;rss_qs = 0
;rss_proto_ipv4 = TCP UDP
;rss_proto_ipv6 = TCP TCP_EX UDP UDP_EX
;symmetrical_rss = yes
;ip_local_q = 7 reserved for future proto stack app
;arp_q = 8
;rx queue cfg
;http://dpdk.org/doc/guides/nics/ixgbe.html
[RXQ0.0]
size = 2048
burst = 32
[SWQ1]
size = 2048
cpu = 1
mempool = MEMPOOL1
dup = yes
;app cfg
[PIPELINE0]
type = MASTER
core = s0c0
[PIPELINE1]
type = SESSM
core = s0c1
pktq_in = RXQ0.0
pktq_out = SWQ0 SWQ1
timer_period = 10
[PIPELINE2]
type = SIP_IN_AGG
core = s0c2
[PIPELINE3]
type = VIP_AGG
core = s0c3
pktq_in = SWQ0
timer_period = 10
[PIPELINE4]
type = EDGE
core = s0c4
;IPS BEGIN
[PIPELINE5]
type = DETECT
core = s0c5
pktq_in = SWQ1
;IPS END

In DPDK 16.11.2, the VMware patch has already been applied:

sed -i "s/pci_intx_mask_supported(dev)/pci_intx_mask_supported(dev)||true/g" ${RTE_SDK}/lib/librte_eal/linuxapp/igb_uio/igb_uio.c

If the LINK0 RSS configuration in qnsm_inspect.cfg is enabled:

[LINK0]
rss_qs = 0
rss_proto_ipv4 = TCP UDP
rss_proto_ipv6 = TCP TCP_EX UDP UDP_EX
symmetrical_rss = yes

the error message changes to:

PANIC in app_link_rss_setup():
LINK0 (0): RSS setup error (null RETA size)
7: [./qnsm-inspect() [0x40f4fd]]
6: [/lib64/libc.so.6(__libc_start_main+0xf5) [0x7f703d1b0b15]]
5: [./qnsm-inspect() [0x40f3c3]]
4: [./qnsm-inspect() [0x5797f8]]
3: [./qnsm-inspect() [0x57729d]]
2: [./qnsm-inspect() [0x40d175]]
1: [./qnsm-inspect() [0x4dc66a]]

CosmosSun (Collaborator) commented Sep 28, 2020

@peiyuefeng
Hello, QNSM has not been tested in a VMware environment yet.
From the RSS error, dev_info.reta_size is 0; are you sure the NIC is an e1000?

RSS is working as per above, but only for the E1000, not for VMXNET3.
For details, see: https://communities.vmware.com/thread/540391

Also, looking at the DPDK PMD code:
the e1000 PMD does support RSS; see eth_igb_infos_get:
dev_info->hash_key_size = IGB_HKEY_MAX_INDEX * sizeof(uint32_t);
dev_info->reta_size = ETH_RSS_RETA_SIZE_128;
dev_info->flow_type_rss_offloads = IGB_RSS_OFFLOAD_ALL;

vmxnet3 has no RSS; see vmxnet3_dev_info_get.
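
To double-check, a minimal probe of what the PMD actually advertises can be built against the DPDK 16.11-era API; if reta_size prints as 0, app_link_rss_setup has nothing to program and panics as above. This is a hedged sketch, not QNSM code, and port 0 is assumed to be the single bound NIC:

#include <stdio.h>
#include <inttypes.h>
#include <rte_eal.h>
#include <rte_ethdev.h>

int main(int argc, char **argv)
{
    if (rte_eal_init(argc, argv) < 0)
        return -1;

    uint8_t port_id = 0;                 /* assumed: the single bound NIC */
    struct rte_eth_dev_info info;
    rte_eth_dev_info_get(port_id, &info);

    printf("driver=%s reta_size=%u hash_key_size=%u rss_offloads=0x%" PRIx64 "\n",
           info.driver_name, (unsigned)info.reta_size, (unsigned)info.hash_key_size,
           info.flow_type_rss_offloads);
    return 0;
}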

peiyuefeng (Author) commented:

Thanks much, I will check that again.

mflu (Collaborator) commented Feb 20, 2021

Close it!

mflu closed this as completed Feb 20, 2021

ZHAN-MQ commented Jul 23, 2021

Same issue with VirtualBox, and the NIC is e1000
