Multiple IP addresses per NIC awareness #3333
After follow-up questions, this seems not to be a bug but a rare case where the host's IP address setup is unusual.
version: 3.3. I hit the same error when there is one more IP address (one of the IPs is a VIP) on the same network.
@wey-gu Do we have a better way to solve this problem?
OK, I see. Now we have a minimal reproduction procedure: when an interface has multiple addresses, only one of them is considered as a listening/configurable candidate. Before a fix lands, could you please put the VIP on an IP range other than the one used for NebulaGraph's internal network? (Or you could control the order of the VIP and the physical IP, which doesn't seem possible, though, as the VIP floats and always ends up as the secondary address.)
@Sophie-Xie With the help of @microeastcowboy, we are now able to reproduce this issue. It's related to the side effect of assuming that each interface comes with only one address, which isn't true.
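To make the failure mode concrete, here is a minimal, hypothetical sketch (not NebulaGraph's actual code) of interface enumeration with getifaddrs(3). If results are keyed by interface name in a plain map, a second inet address on the same NIC, such as a floating VIP, silently overwrites the first, so only one address per interface survives as a candidate; keeping a list per NIC preserves them all:

```cpp
#include <arpa/inet.h>
#include <ifaddrs.h>
#include <netinet/in.h>

#include <cstdio>
#include <map>
#include <string>
#include <vector>

int main() {
  struct ifaddrs* ifas = nullptr;
  if (getifaddrs(&ifas) != 0) {
    std::perror("getifaddrs");
    return 1;
  }

  // Buggy assumption: one address per interface. A second inet address
  // on eth0 (e.g. a floating VIP) overwrites the first one here.
  std::map<std::string, std::string> onePerNic;

  // Fix direction: keep *all* addresses of each interface as candidates.
  std::map<std::string, std::vector<std::string>> allPerNic;

  for (auto* ifa = ifas; ifa != nullptr; ifa = ifa->ifa_next) {
    if (ifa->ifa_addr == nullptr || ifa->ifa_addr->sa_family != AF_INET) {
      continue;  // IPv4 only, matching the transcript later in this thread
    }
    char buf[INET_ADDRSTRLEN];
    auto* sin = reinterpret_cast<struct sockaddr_in*>(ifa->ifa_addr);
    inet_ntop(AF_INET, &sin->sin_addr, buf, sizeof(buf));
    onePerNic[ifa->ifa_name] = buf;           // last one wins: a candidate is lost
    allPerNic[ifa->ifa_name].push_back(buf);  // every address stays a candidate
  }
  freeifaddrs(ifas);

  for (const auto& [nic, addrs] : allPerNic) {
    std::printf("%s: %zu address(es), the map kept only %s\n",
                nic.c_str(), addrs.size(), onePerNic[nic].c_str());
  }
  return 0;
}
```

With the per-NIC list, a configured --local_ip such as 10.0.0.4 still matches even when the VIP happens to be enumerated after it.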
Do you have another log like the one below?
Oh, you mean you expected multiple lines to be logged, with the NIC's other address listed among the candidates? Then no, there is only this single log line, and the process wasn't up at that point.
I see, @wey-gu. I will send you a patch later this week or next week; would you help verify it?
Sure! Thanks @critical27, drop me the patch and I can verify it real quick :)
@wey-gu Thank you very much for your suggestions and comments. I have moved the VIP to other addresses.
@critical27 Tested the patch with git apply, and it all looks good now.
$ ip -f inet addr show eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
inet 10.0.0.4/24 brd 10.0.0.255 scope global eth0
valid_lft forever preferred_lft forever
inet 192.168.2.4/24 scope global eth0
valid_lft forever preferred_lft forever
$ sudo /usr/local/nebula/scripts/nebula.service restart graphd
[INFO] Stopping nebula-graphd...
[INFO] Done
[INFO] Starting nebula-graphd...
[INFO] Done
$ tail /usr/local/nebula/logs/nebula-graphd.INFO
I20221117 03:16:08.163239 213325 WebService.cpp:124] Web service started on HTTP[19669]
I20221117 03:16:08.163314 213324 GraphDaemon.cpp:136] Number of networking IO threads: 2
I20221117 03:16:08.163331 213324 GraphDaemon.cpp:145] Number of worker threads: 2
I20221117 03:16:08.167465 213324 MetaClient.cpp:80] Create meta client to "127.0.0.1":9559
I20221117 03:16:08.167500 213324 MetaClient.cpp:81] root path: /usr/local/nebula, data path size: 0
I20221117 03:16:08.182987 213324 MetaClient.cpp:3114] Load leader ok
I20221117 03:16:08.183965 213324 MetaClient.cpp:162] Register time task for heartbeat!
I20221117 03:16:08.184466 213324 GraphSessionManager.cpp:331] Total of 0 sessions are loaded
I20221117 03:16:08.185299 213324 Snowflake.cpp:16] WorkerId init success: 1
I20221117 03:16:08.185449 213352 GraphServer.cpp:59] Starting nebula-graphd on 10.0.0.4:9669
$ grep local_ip /usr/local/nebula/etc/nebula-graphd.conf
--local_ip=10.0.0.4
$ date
Thu Nov 17 03:17:02 UTC 2022
version: 2.6
In case, for some reason, the host comes with multiple IP addresses in the same network, metad won't boot up:
https://discuss.nebula-graph.com.cn/t/topic/6540/10
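For anyone hitting this on an affected version, a small standalone diagnostic along the same lines (again a sketch, assuming getifaddrs(3) sees the same interfaces the daemon does; the helper name check_local_ip is made up for illustration) can confirm whether the configured --local_ip appears among all addresses of some NIC, not just the first:

```cpp
#include <arpa/inet.h>
#include <ifaddrs.h>
#include <netinet/in.h>

#include <cstdio>
#include <cstring>

// Usage: ./check_local_ip 10.0.0.4   (pass the value of --local_ip)
int main(int argc, char** argv) {
  if (argc != 2) {
    std::fprintf(stderr, "usage: %s <local_ip>\n", argv[0]);
    return 2;
  }
  struct ifaddrs* ifas = nullptr;
  if (getifaddrs(&ifas) != 0) {
    std::perror("getifaddrs");
    return 2;
  }
  bool found = false;
  for (auto* ifa = ifas; ifa != nullptr; ifa = ifa->ifa_next) {
    if (ifa->ifa_addr == nullptr || ifa->ifa_addr->sa_family != AF_INET) {
      continue;  // skip entries without an IPv4 address
    }
    char buf[INET_ADDRSTRLEN];
    auto* sin = reinterpret_cast<struct sockaddr_in*>(ifa->ifa_addr);
    inet_ntop(AF_INET, &sin->sin_addr, buf, sizeof(buf));
    std::printf("%s: %s\n", ifa->ifa_name, buf);  // every address, per NIC
    if (std::strcmp(buf, argv[1]) == 0) {
      found = true;  // --local_ip is present, even if not first on its NIC
    }
  }
  freeifaddrs(ifas);
  std::printf("%s %s among the interface addresses\n",
              argv[1], found ? "is" : "is NOT");
  return found ? 0 : 1;
}
```

If this reports the address as present while the daemon still refuses to boot, the NIC really does carry multiple addresses and the one-address-per-interface assumption described above is the likely culprit.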