
Update netlink messages handler #2233

Merged · 6 commits merged into master on May 25, 2022

Conversation

@liorghub (Contributor) commented Apr 18, 2022

What I did
Ignore netlink DELLINK messages if the port has a master; this applies to the case where the port was part of a VLAN bridge or LAG.

Why I did it
The netlink message handler in portsyncd was ignoring all messages that had a master. Therefore we ignored messages on interfaces that belong to a LAG, not only interfaces that belong to a bridge as intended. The result was "netdev_oper_status" down in PORT_TABLE in state DB for a port which is part of a LAG, although it is actually up.

How I verified it
Check "netdev_oper_status" in PORT_TABLE in state DB for a port which is part of a LAG.

Details if related

@liorghub liorghub requested a review from prsunny as a code owner April 18, 2022 19:57
@zhenggen-xu (Collaborator) commented:
@liorghub Thanks for the fix. Can you please add VS tests to cover the cases where the port is part of a Bridge or LAG?

```cpp
if (master)
{
    return;
}

LinkCache &linkCache = LinkCache::getInstance();
```
A Collaborator commented on the snippet above:
AFAIK, this would be handled by teamsyncd. Can you check?

@liorghub (Contributor, Author) replied Apr 24, 2022:

@prsunny I checked; teamsyncd handles the messages sent for the port-channel interface itself, and those messages are marked with type="team". The bug I fixed concerns the handling of messages for ports that belong to a port-channel. These messages are not marked with type="team".
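A minimal sketch of this distinction (illustrative only — the struct and function names are invented for the example; the real daemons parse these attributes from libnl route/link messages):

```cpp
#include <cassert>
#include <string>

// Sketch of the routing described above: the PortChannel netdev itself
// carries IFLA_INFO_KIND == "team" and is consumed by teamsyncd, while a
// member port carries no "team" kind -- only a non-zero master ifindex --
// so its messages are handled by portsyncd.
struct LinkMsg
{
    std::string kind; // IFLA_INFO_KIND, e.g. "team", or "" when absent
    int master;       // IFLA_MASTER ifindex, 0 when the port has no master
};

bool isTeamsyncdMessage(const LinkMsg &m) { return m.kind == "team"; }
bool isLagMemberMessage(const LinkMsg &m) { return m.kind != "team" && m.master != 0; }
```

Under this model, the Ethernet4 messages with master:2 in the logs below are LAG-member messages that only portsyncd sees.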

@prsunny (Collaborator) replied Apr 25, 2022:

OK @judyjoseph, can you check this? This seems to be a basic change that was missed. @liorghub, what is the functional impact?

@liorghub (Contributor, Author) replied Apr 26, 2022:

The functional impact is in LLDP: there we check the state DB PORT_TABLE for "netdev_oper_status" up before sending LLDP commands. If "netdev_oper_status" is down, the LLDP command is not sent, causing wrong LLDP behavior.

See the following code in lldpmgrd.
https://github.com/Azure/sonic-buildimage/blob/cc30771f6b97234a6dd19d8f97d5dfd44551cf20/dockers/docker-lldp/lldpmgrd#L170
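The gating described here can be sketched as follows (a hedged illustration, not the actual lldpmgrd code, which is Python; "netdev_oper_status" is the STATE_DB field named in this thread, and the helper name is invented for the example):

```cpp
#include <cassert>
#include <map>
#include <string>

// Illustrative sketch only -- models the check this thread describes:
// lldpmgrd consults STATE_DB's PORT_TABLE and sends the LLDP configuration
// command only when "netdev_oper_status" is "up". A stale "down" value
// (the bug here) means the command is never sent for a healthy LAG member.
using PortTable = std::map<std::string, std::map<std::string, std::string>>;

bool shouldSendLldpCommand(const PortTable &stateDb, const std::string &port)
{
    auto it = stateDb.find(port);
    if (it == stateDb.end())
        return false;
    auto field = it->second.find("netdev_oper_status");
    return field != it->second.end() && field->second == "up";
}
```

With the pre-fix behaviour, a LAG member whose last handled netlink message was oper-down stays "down" in STATE_DB, so a check like this keeps failing even though the kernel reports the port up.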

A Collaborator replied:

OK, LGTM. As Xu suggested, please add VS tests to cover this.

@judyjoseph (Contributor) replied:

OK @judyjoseph, can you check this? This seems to be a basic change that was missed. @liorghub, what is the functional impact?

@prsunny I did a quick check .. noting down the events from syslog. I find that the 'netdev_oper_status' is set much earlier for an interface as long as the interface is connected and up. The teamd member addition happens earlier.

Apr 26 18:33:56.812132 str2---1 NOTICE swss0#orchagent: :- initializePort: Initializing port alias:Ethernet4 pid:1000000000006
Apr 26 18:33:56.817494 str2---1 NOTICE swss0#portsyncd: :- onMsg: nlmsg type:16 key:Ethernet4 admin:0 oper:0 addr:40:7c:7d:bb:26:0b ifindex:22 master:0
Apr 26 18:33:56.817741 str2---1 NOTICE swss0#portsyncd: :- onMsg: Publish Ethernet4(ok:down) to state db
Apr 26 18:33:56.818394 str2---1 NOTICE swss0#orchagent: :- addHostIntfs: Create host interface for port Ethernet4
Apr 26 18:33:56.833381 str2---1 NOTICE swss0#orchagent: :- setHostIntfsOperStatus: Set operation status DOWN to host interface Ethernet4
Apr 26 18:33:56.833450 str2---1 NOTICE swss0#orchagent: :- initPort: Initialized port Ethernet4
Apr 26 18:33:56.897841 str2---1 NOTICE swss0#portsyncd: :- onMsg: nlmsg type:16 key:Ethernet4 admin:1 oper:1 addr:40:7c:7d:bb:26:0b ifindex:22 master:0
Apr 26 18:33:56.898243 str2---1 NOTICE swss0#portsyncd: :- onMsg: Publish Ethernet4(ok:up) to state db
Apr 26 18:33:56.898260 str2---1 NOTICE swss0#portsyncd: :- onMsg: nlmsg type:16 key:Ethernet4 admin:1 oper:1 addr:40:7c:7d:bb:26:0b ifindex:22 master:2
Apr 26 18:33:56.898310 str2---1 NOTICE swss0#portsyncd: message repeated 2 times: [ :- onMsg: nlmsg type:16 key:Ethernet4 admin:1 oper:1 addr:40:7c:7d:bb:26:0b ifindex:22 master:2]
Apr 26 18:33:56.900044 str2---1 NOTICE swss0#portsyncd: :- onMsg: nlmsg type:16 key:Ethernet4 admin:1 oper:1 addr:40:7c:7d:bb:26:0b ifindex:22 master:2
Apr 26 18:33:56.901037 str2---1 INFO kernel: [  140.005295] PortChannel102: Port device Ethernet4 added
Apr 26 18:33:56.901375 str2---1 NOTICE teamd0#teammgrd: :- addLagMember: Add Ethernet4 to port channel PortChannel102
Apr 26 18:33:56.912638 str2---1 NOTICE swss0#portsyncd: :- onMsg: nlmsg type:16 key:Ethernet4 admin:1 oper:1 addr:40:7c:7d:bb:26:0b ifindex:22 master:2

@liorghub could you share a bit more detail on when you observe this behavior -- is it always seen with LLDP? For all port-channel member interfaces, or only for interfaces which were initially oper down and became oper up later once they became part of the port-channel?

@liorghub (Contributor, Author) replied Apr 28, 2022:

@judyjoseph
Hi Judy,
The issue happens when the switch is booting. Ethernet0 is part of a port-channel.

As you can see below, portsyncd gets several netlink messages for Ethernet0. The last message that arrives without "master" (master:0) is at 07:19:15.359655, and it is oper down. Later we get more messages for Ethernet0 with oper up, but we ignore them since they are marked with "master". Interfaces that have a master can be part of either a VLAN bridge or a port-channel; we want to ignore only the VLAN bridge case (confirmed with @zhenggen-xu).

Since the last message for Ethernet0 we handle is oper down, state DB holds "netdev_oper_status" = "down", and this is causing wrong LLDP behaviour. The issue is persistent and occurs after each reboot.

See below logs:

 root@r-tigon-20:/home/admin# grep -e "nlmsg type" -e Publish  /var/log/syslog  | egrep "Ethernet0"
Apr 28 07:19:15.287582 r-tigon-20 NOTICE swss#portsyncd: :- onMsg: nlmsg type:16 key:Ethernet0 admin:0 oper:0 addr:1c:34:da:c9:60:68 ifindex:77 master:0 type:sx_netdev
Apr 28 07:19:15.287898 r-tigon-20 NOTICE swss#portsyncd: :- onMsg: Publish Ethernet0(ok:down) to state db
Apr 28 07:19:15.291418 r-tigon-20 NOTICE swss#portsyncd: :- onMsg: nlmsg type:16 key:Ethernet0 admin:0 oper:0 addr:1c:34:da:c9:60:00 ifindex:77 master:0 type:sx_netdev
Apr 28 07:19:15.291972 r-tigon-20 NOTICE swss#portsyncd: :- onMsg: Publish Ethernet0(ok:down) to state db
Apr 28 07:19:15.359292 r-tigon-20 NOTICE swss#portsyncd: :- onMsg: nlmsg type:16 key:Ethernet0 admin:0 oper:0 addr:1c:34:da:c9:60:00 ifindex:77 master:0 type:sx_netdev
Apr 28 07:19:15.359510 r-tigon-20 NOTICE swss#portsyncd: :- onMsg: Publish Ethernet0(ok:down) to state db
Apr 28 07:19:15.359655 r-tigon-20 NOTICE swss#portsyncd: :- onMsg: nlmsg type:16 key:Ethernet0 admin:1 oper:0 addr:1c:34:da:c9:60:00 ifindex:77 master:0 type:sx_netdev
Apr 28 07:19:15.359866 r-tigon-20 NOTICE swss#portsyncd: :- onMsg: Publish Ethernet0(ok:down) to state db
Apr 28 07:19:15.360309 r-tigon-20 NOTICE swss#portsyncd: :- onMsg: nlmsg type:16 key:Ethernet0 admin:1 oper:0 addr:1c:34:da:c9:60:00 ifindex:77 master:4 type:sx_netdev
Apr 28 07:19:15.360352 r-tigon-20 NOTICE swss#portsyncd: :- onMsg: nlmsg type:16 key:Ethernet0 admin:1 oper:0 addr:1c:34:da:c9:60:00 ifindex:77 master:4 type:sx_netdev
Apr 28 07:19:15.365219 r-tigon-20 NOTICE swss#portsyncd: :- onMsg: nlmsg type:16 key:Ethernet0 admin:1 oper:0 addr:1c:34:da:c9:60:00 ifindex:77 master:4 type:sx_netdev
Apr 28 07:19:15.367925 r-tigon-20 NOTICE swss#portsyncd: :- onMsg: nlmsg type:16 key:Ethernet0 admin:1 oper:0 addr:1c:34:da:c9:60:00 ifindex:77 master:4 type:sx_netdev
Apr 28 07:19:27.880041 r-tigon-20 NOTICE swss#portsyncd: :- onMsg: nlmsg type:16 key:Ethernet0 admin:1 oper:1 addr:1c:34:da:c9:60:00 ifindex:77 master:4 type:sx_netdev
Apr 28 07:19:28.011930 r-tigon-20 NOTICE swss#portsyncd: :- onMsg: nlmsg type:16 key:Ethernet0 admin:1 oper:1 addr:1c:34:da:c9:60:00 ifindex:77 master:4 type:sx_netdev

@dprital dprital requested a review from judyjoseph April 26, 2022 07:02
@liorghub (Contributor, Author) commented May 1, 2022:

@liorghub Thanks for the fix. Can you please add VS tests to cover the cases where the port is part of a Bridge or LAG?

@judyjoseph I added a VS test as requested.

@liorghub (Contributor, Author) commented May 1, 2022:

/azpw run Azure.sonic-swss

@mssonicbld (Collaborator):
/AzurePipelines run Azure.sonic-swss

@azure-pipelines:
Azure Pipelines successfully started running 1 pipeline(s).

@liorghub (Contributor, Author) commented May 1, 2022:

Azure.sonic-swss (BuildArm arm64) is failing in download artifacts:

Download from the specified build: #94906
Download artifact to: /data/myagent/_work/1/s/sonic-buildimage.centec-arm64
Using default max parallelism. Max dedup parallelism: 192
ApplicationInsightsTelemetrySender will correlate events with X-TFS-Session d7dd35bc-155c-4fca-a243-832655dcc403
DedupManifestArtifactClient will correlate http requests with X-TFS-Session d7dd35bc-155c-4fca-a243-832655dcc403
Minimatch patterns: [**]
Filtered 719 files from the Minimatch filters supplied.
Downloaded 0.0 MB out of 5,916.6 MB (0%). Downloaded 362.1 MB out of 5,916.6 MB (6%).
##[error]Exit code 139 returned from process: file name '/data/myagent/bin.2.200.2/Agent.PluginHost', arguments 'task "Agent.Plugins.PipelineArtifact.DownloadPipelineArtifactTaskV2_0_0, Agent.Plugins"'.


@liorghub (Contributor, Author) commented May 2, 2022:

The following tests failed; trying to rerun.

test_NeighborAddRemoveIpv6LinkLocal
test_PortMirrorToLagAddRemove
test_RouteAddRemoveIpv4Route
test_RouteAddRemoveIpv4RouteUnresolvedNeigh
test_RouteAddRemoveIpv4RouteWithVrf
test_RouteAddRemoveIpv4BlackholeRoute


@prsunny prsunny requested a review from prgeor May 11, 2022 22:44
@prgeor (Contributor) commented May 11, 2022:

@liorghub can you test the following scenario: verify that "admin_status" is the same in APPL_DB's PORT_TABLE and STATE_DB's PORT_TABLE for ports that are part of a VLAN, ports that are part of a port-channel, and ports that are not part of any VLAN or port-channel.

@liorghub (Contributor, Author) commented May 19, 2022:

@liorghub can you test the following scenario: verify that "admin_status" is the same in APPL_DB's PORT_TABLE and STATE_DB's PORT_TABLE for ports that are part of a VLAN, ports that are part of a port-channel, and ports that are not part of any VLAN or port-channel.

@prgeor
I performed the test you asked for; the results are as expected.
Port not part of a VLAN and not part of a port-channel:

redis-cli -n 0 hgetall "PORT_TABLE:Ethernet96"
 1) "oper_status"
 2) "up"

redis-cli -n 6 hgetall "PORT_TABLE|Ethernet96"
 3) "netdev_oper_status"
 4) "up"

port part of port-channel:

redis-cli -n 0 hgetall "PORT_TABLE:Ethernet112"
 1) "oper_status"
 2) "up"

redis-cli -n 6 hgetall "PORT_TABLE|Ethernet112"
 3) "netdev_oper_status"
 4) "up"

port part of vlan:

redis-cli -n 0 hgetall "PORT_TABLE:Ethernet92"
 1) "oper_status"
 2) "up"

redis-cli -n 6 hgetall "PORT_TABLE|Ethernet92"
 3) "netdev_oper_status"
 4) "down"

For a port which is part of a VLAN, there is indeed an inconsistency between the databases. Before my changes, the inconsistency occurred for ports which are part of a LAG as well. It looks like we should completely remove the ignore operation we have here:
https://github.com/Azure/sonic-swss/blob/2ea8581da4ba6f97bebde4845a234d7c810e5515/portsyncd/linksync.cpp#L215
Once we remove it completely (I removed it for LAG members), the inconsistency will be solved.
@zhenggen-xu Can you please approve?

@liorghub (Contributor, Author) commented:
@prgeor @zhenggen-xu
Can we meet via Teams and discuss? I have all the details, and we can close this quickly if we talk.

@prgeor (Contributor) commented May 19, 2022:

@prgeor @zhenggen-xu Can we meet via Teams and discuss? I have all the details, and we can close this quickly if we talk.

@liorghub OK

@zhenggen-xu (Collaborator) commented:

@liorghub I thought about this a little more; I think the right fix is to change:

```cpp
if (master)
{
    return;
}
```

to:

```cpp
if (master && nlmsg_type == RTM_DELLINK)
{
    return;
}
```

What we were really trying to avoid before was removing the port itself when the port was removed from the bridge. I think this should be applicable to a LAG too (in case the port was removed from the LAG), hence the change above. This should also fix the inconsistency of the link status across the tables that you mentioned above. My email: zxu@linkedin.com; we can meet in Teams.
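The effect of filtering only DELLINK messages can be illustrated with a small simulation (a sketch, not the real LinkSync code: the message struct and replayed sequence are simplified from the Ethernet0 syslog above, and the RTM constants match the Linux rtnetlink values):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Simplified replay of the Ethernet0 message sequence from the logs above.
// Old filter: drop every message whose port has a master -> the last
// message applied is oper-down, so the published status stays "down".
// New filter: drop only RTM_DELLINK messages with a master -> the later
// oper-up NEWLINK messages are applied and the status ends "up".
constexpr int RTM_NEWLINK = 16; // Linux rtnetlink message types
constexpr int RTM_DELLINK = 17;

struct Msg
{
    int nlmsg_type;
    int master;  // master ifindex, 0 when none
    bool operUp;
};

std::string replay(const std::vector<Msg> &msgs, bool newFilter)
{
    std::string status = "down";
    for (const auto &m : msgs)
    {
        bool ignore = newFilter ? (m.master != 0 && m.nlmsg_type == RTM_DELLINK)
                                : (m.master != 0);
        if (ignore)
            continue;
        status = m.operUp ? "up" : "down";
    }
    return status;
}

// Sequence as in the syslog: oper-down without a master, then oper-up
// messages once the port has joined the LAG (master:4).
const std::vector<Msg> kEthernet0 = {
    {RTM_NEWLINK, 0, false},
    {RTM_NEWLINK, 4, false},
    {RTM_NEWLINK, 4, true},
    {RTM_NEWLINK, 4, true},
};
```

The simulation reproduces the bug: the old filter leaves the port "down" forever, while the DELLINK-only filter still ignores member removal but lets oper-status updates through.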

@liorghub (Contributor, Author) replied:
@zhenggen-xu your fix made it work, thanks!
@prgeor I retested the scenarios you mentioned, and also the scenario of removing a port from a VLAN and from a LAG. Everything is working as expected; there is no inconsistency between the databases anymore.

@zhenggen-xu previously approved these changes May 22, 2022
```diff
@@ -215,7 +215,7 @@ void LinkSync::onMsg(int nlmsg_type, struct nl_object *obj)
     /* If netlink for this port has master, we ignore that for now
      * This could be the case where the port was removed from VLAN bridge
      */
-    if (master)
+    if (master && nlmsg_type == RTM_DELLINK)
```
A Collaborator commented on the diff:

Let's change the comment section above to state that we ignore the DELLINK message if the port has a master; this is applicable to the case where the port was part of a VLAN or LAG, etc. You should rename the PR title too.

@liorghub (Contributor, Author) replied:

Done.

@liorghub liorghub changed the title Do not ignore netlink messages on interfaces belong to LAG Ignore only DELLINK message if port has master May 23, 2022
@dprital (Collaborator) commented May 24, 2022:

@zhenggen-xu, @prgeor - now that all the comments have been addressed, can you please approve this PR?

@liorghub liorghub changed the title Ignore only DELLINK message if port has master Update netlink messages handler May 24, 2022
@liat-grozovik liat-grozovik merged commit 7fc0f73 into sonic-net:master May 25, 2022
judyjoseph pushed a commit that referenced this pull request May 25, 2022
preetham-singh pushed a commit to preetham-singh/sonic-swss that referenced this pull request Aug 6, 2022