Can't ping from VLAN interface which has LAG member #2298

Closed
pawmas opened this issue Nov 25, 2018 · 2 comments

Comments

@pawmas

pawmas commented Nov 25, 2018

Description
The ping command doesn't work when I run it against an IP address from the VLAN interface subnet. The VLAN interface has one member, which is a LAG interface. In turn, the LAG consists of two physical interfaces.

Steps to reproduce the issue:

  1. Create a LAG interface named lag1
config portchannel add lag1
  2. Assign the physical ports 1 and 2 to the LAG interface
ip link set Ethernet0 down
ip link set Ethernet4 down
config portchannel member add lag1 Ethernet0
config portchannel member add lag1 Ethernet4
  3. Remove existing IP addresses from the physical ports
ip addr flush dev Ethernet0
ip addr flush dev Ethernet4
  4. Create VLANs and assign the LAG interface to both of them
config vlan add 610
config vlan member add 610 lag1
config vlan add 611
config vlan member add 611 lag1
  5. Assign IP addresses to the VLAN interfaces
config interface Vlan610 ip add 172.25.164.196/31
config interface Vlan610 ip add 2001:4878:c225:0000:0000:0003:0000:0001/96
config interface Vlan611 ip add 172.25.164.198/31
config interface Vlan610 ip add 2001:4878:c225:0000:0000:0004:0000:0001/96
  6. Bring up all interfaces
config interface Vlan610 startup
config interface Vlan611 startup
config interface lag1 startup
config interface Ethernet0 startup
config interface Ethernet4 startup
  7. Save the running config as the startup config
config save
  8. Reboot SONiC
reboot
  9. On the remote device, repeat steps 1-8, except in step 5 allocate different IP addresses from the same subnets of the VLAN interfaces
config interface Vlan610 ip add 172.25.164.197/31
config interface Vlan610 ip add 2001:4878:c225:0000:0000:0003:0000:0002/96
config interface Vlan611 ip add 172.25.164.199/31
config interface Vlan610 ip add 2001:4878:c225:0000:0000:0004:0000:0002/96
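
After completing these steps, the LAG and VLAN state can be sanity-checked. This is a minimal sketch assuming the standard SONiC CLI, the names used above, and that teamd runs in the usual teamd container:

root@sonic:~# show interfaces portchannel
root@sonic:~# show vlan brief
root@sonic:~# docker exec -it teamd teamdctl lag1 state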

Describe the results you received:
I ran the ping command expecting to get an ARP reply from the remote device, but without success:

root@sonic:~# ping 172.25.164.197
PING 172.25.164.197 (172.25.164.197) 56(84) bytes of data.
From 172.25.164.196 icmp_seq=1 Destination Host Unreachable
From 172.25.164.196 icmp_seq=2 Destination Host Unreachable
From 172.25.164.196 icmp_seq=3 Destination Host Unreachable
...

When I capture traffic on the Vlan610 interface on the local machine, I can see only ARP requests:

root@sonic:~# tcpdump -i Vlan610 -en
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on Vlan610, link-type EN10MB (Ethernet), capture size 262144 bytes
22:57:51.748889 3c:2c:99:02:9b:2f > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), length 42: Request who-has 172.25.164.197 tell 172.25.164.196, length 28
22:57:52.771108 3c:2c:99:02:9b:2f > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), length 42: Request who-has 172.25.164.197 tell 172.25.164.196, length 28
22:57:53.795004 3c:2c:99:02:9b:2f > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), length 42: Request who-has 172.25.164.197 tell 172.25.164.196, length 28
...
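
For completeness, capturing ARP directly on the LAG member ports (not shown here) would tell whether the requests actually reach the wire; a hedged example using the same interface names:

root@sonic:~# tcpdump -i Ethernet0 -en arp
root@sonic:~# tcpdump -i Ethernet4 -en arp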

The ARP cache entry stays incomplete:

root@sonic:~# arp -n
Address                  HWtype  HWaddress           Flags Mask            Iface
172.25.164.197                   (incomplete)                              Vlan610

Describe the results you expected:
I expect to see traffic on the Vlan610 interface; in particular, I would like the ARP table to resolve the IP of the remote device. I also expect to capture ARP request/reply traffic in the tcpdump session while the ping command runs.

Additional information:
I can confirm that I cannot see any traffic on the Vlan610 interface of the remote device.
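
A quick cross-check on both ends is the interface counters (assuming the standard SONiC CLI is available on this build), to see whether any frames are transmitted on the LAG members at all:

root@sonic:~# show interfaces counters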

Route table dump:

root@sonic:~# ip route
172.25.164.196/31 dev Vlan610 proto kernel scope link src 172.25.164.196 
172.25.164.198/31 dev Vlan611 proto kernel scope link src 172.25.164.198 

Link state of interfaces:

root@sonic:~# ip addr
8: lag1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP group default qlen 1000
    link/ether 3c:2c:99:02:9b:2f brd ff:ff:ff:ff:ff:ff
    inet6 fe80::3e2c:99ff:fe02:9b2f/64 scope link 
       valid_lft forever preferred_lft forever
10: Vlan610@Bridge: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 3c:2c:99:02:9b:2f brd ff:ff:ff:ff:ff:ff
    inet 172.25.164.196/31 scope global Vlan610
       valid_lft forever preferred_lft forever
    inet6 2001:4878:c225::3:0:1/96 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::3e2c:99ff:fe02:9b2f/64 scope link 
       valid_lft forever preferred_lft forever
11: Vlan611@Bridge: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 3c:2c:99:02:9b:2f brd ff:ff:ff:ff:ff:ff
    inet 172.25.164.198/31 scope global Vlan611
       valid_lft forever preferred_lft forever
    inet6 2001:4878:c225::4:0:1/96 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::3e2c:99ff:fe02:9b2f/64 scope link 
       valid_lft forever preferred_lft forever
28: Ethernet4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc pfifo_fast master lag1 state UP group default qlen 1000
    link/ether 3c:2c:99:02:9b:2f brd ff:ff:ff:ff:ff:ff
    inet6 fe80::3e2c:99ff:fe02:9b2f/64 scope link 
       valid_lft forever preferred_lft forever
30: Ethernet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc pfifo_fast master lag1 state UP group default qlen 1000
    link/ether 3c:2c:99:02:9b:2f brd ff:ff:ff:ff:ff:ff
    inet6 fe80::3e2c:99ff:fe02:9b2f/64 scope link 
       valid_lft forever preferred_lft forever

Here is the running configuration dump from /etc/sonic/config_db.json:

root@sonic:~# cat /etc/sonic/config_db.json 
{
    "VLAN_INTERFACE": {
        "Vlan610|172.25.164.196/31": {}, 
        "Vlan610|2001:4878:c225:0000:0000:0003:0000:0001/96": {}, 
        "Vlan610|2001:4878:c225:0000:0000:0004:0000:0001/96": {}, 
        "Vlan611|172.25.164.198/31": {}
    }, 
    "FLEX_COUNTER_TABLE": {
        "PFCWD": {
            "FLEX_COUNTER_STATUS": "enable"
        }, 
        "PORT": {
            "FLEX_COUNTER_STATUS": "enable"
        }, 
        "QUEUE": {
            "FLEX_COUNTER_STATUS": "enable"
        }
    }, 
    "BGP_NEIGHBOR": {
        "10.0.0.1": {
            "rrclient": "0", 
            "name": "ARISTA01T2", 
            "local_addr": "10.0.0.0", 
            "nhopself": "0", 
            "holdtime": "180", 
            "asn": "65200", 
            "keepalive": "60"
        }, 
        ...
    }, 
    "DEVICE_METADATA": {
        "localhost": {
            "hwsku": "Accton-AS7816-64X", 
            "hostname": "sonic", 
            "platform": "x86_64-accton_as7816_64x-r0", 
            "mac": "3c:2c:99:02:9b:2f", 
            "bgp_asn": "65100", 
            "type": "LeafRouter"
        }
    }, 
    "VLAN": {
        "Vlan610": {
            "members": [
                "lag1"
            ], 
            "vlanid": "610"
        }, 
        "Vlan611": {
            "members": [
                "lag1"
            ], 
            "vlanid": "611"
        }
    }, 
    "PORTCHANNEL": {
        "lag1": {
            "admin_status": "up", 
            "mtu": "9100"
        }
    }, 
    "PORT": {
        "Ethernet0": {
            "alias": "hundredGigE1", 
            "admin_status": "up", 
            "lanes": "73,74,75,76", 
            "mtu": "9100"
        }, 
        "Ethernet4": {
            "alias": "hundredGigE2", 
            "admin_status": "up", 
            "lanes": "65,66,67,68", 
            "mtu": "9100"
        }, 
        "Ethernet8": {
            "alias": "hundredGigE3", 
            "admin_status": "up", 
            "lanes": "81,82,83,84", 
            "mtu": "9100"
        }, 
        ...
    }, 
    "PORTCHANNEL_MEMBER": {
        "lag1|Ethernet0": {}, 
        "lag1|Ethernet4": {}
    }, 
    "INTERFACE": {
        "Ethernet8|10.0.0.4/31": {}, 
        "Ethernet12|10.0.0.6/31": {}, 
        ...
    }, 
    "VLAN_MEMBER": {
        "Vlan610|lag1": {
            "tagging_mode": "tagged"
        }, 
        "Vlan611|lag1": {
            "tagging_mode": "tagged"
        }
    }, 
    "LOOPBACK_INTERFACE": {
        "Loopback0|10.1.0.1/32": {}
    }
}

Output of show version:

SONiC Software Version: SONiC.HEAD.781-465ebba
Distribution: Debian 9.6
Kernel: 4.9.0-7-amd64
Build commit: 465ebba
Build date: Wed Nov 21 13:45:33 UTC 2018
Built by: johnar@jenkins-worker-4

Docker images:
REPOSITORY                 TAG                 IMAGE ID            SIZE
docker-syncd-brcm          HEAD.781-465ebba    3dce8ec8a958        336.2 MB
docker-syncd-brcm          latest              3dce8ec8a958        336.2 MB
docker-orchagent-brcm      HEAD.781-465ebba    f177bffee105        282.6 MB
docker-orchagent-brcm      latest              f177bffee105        282.6 MB
docker-lldp-sv2            HEAD.781-465ebba    1825d980eafc        271 MB
docker-lldp-sv2            latest              1825d980eafc        271 MB
docker-dhcp-relay          HEAD.781-465ebba    037f74d941cd        253.3 MB
docker-dhcp-relay          latest              037f74d941cd        253.3 MB
docker-database            HEAD.781-465ebba    9b9839843f24        251.9 MB
docker-database            latest              9b9839843f24        251.9 MB
docker-teamd               HEAD.781-465ebba    48c80858e9a6        270.9 MB
docker-teamd               latest              48c80858e9a6        270.9 MB
docker-snmp-sv2            HEAD.781-465ebba    72efa289d8a4        290.8 MB
docker-snmp-sv2            latest              72efa289d8a4        290.8 MB
docker-router-advertiser   HEAD.781-465ebba    1cfebd071d71        249.6 MB
docker-router-advertiser   latest              1cfebd071d71        249.6 MB
docker-platform-monitor    HEAD.781-465ebba    b77a2b2d1e2d        283.5 MB
docker-platform-monitor    latest              b77a2b2d1e2d        283.5 MB
docker-fpm-quagga          HEAD.781-465ebba    3d659d2da8e4        277.7 MB
docker-fpm-quagga          latest              3d659d2da8e4        277.7 MB

I can confirm that I experience the same issue on version 201817.

Here is the hardware on which I tested the above configuration:

  • Accton AS7816-64x (Broadcom Tomahawk II - BCM56970)
  • Celestica Seastone DX010 (Broadcom Tomahawk - BCM56960)
@dawnbeauty

Could you try 'PortChannel0001' instead of 'lag1'?

@pawmas

pawmas commented Dec 26, 2018

Could you try 'PortChannel0001' instead of 'lag1'?

Yeah, that's it! PortChannel0001 instead of lag1 is the correct way.

Thanks @dawnbeauty !
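
For reference, a minimal reworked sketch of the configuration with a compliant port channel name, based on the commands in the steps above (only the renamed lines are shown; everything else stays the same):

config portchannel add PortChannel0001
config portchannel member add PortChannel0001 Ethernet0
config portchannel member add PortChannel0001 Ethernet4
config vlan member add 610 PortChannel0001
config vlan member add 611 PortChannel0001
config interface PortChannel0001 startup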

@pawmas pawmas closed this as completed Dec 26, 2018
yxieca added a commit to yxieca/sonic-buildimage that referenced this issue May 27, 2022
* 910bfd4 2022-05-28 | [ACL] Add default action_list for default ACL table type (sonic-net#2298) (github/master) [bingwang-ms]
* 4d6fa42 2022-05-27 | [ci] Change artifact reference pipeline to common lib pipeline. (sonic-net#2294) [Shilong Liu]
* 583236f 2022-05-26 | [P4Orch] Lazy UDF match creation to avoid failure during warm reboot (sonic-net#2282) [Yilan Ji]
* d7b5ff7 2022-05-26 | [ci] Use correct branch when downloading artifact. (sonic-net#2292) [Shilong Liu]
* 2f13244 2022-05-26 | [ci] Improve azp trigger settings to automaticlly support new release branch. (sonic-net#2289) [Shilong Liu]

Signed-off-by: Ying Xie <ying.xie@microsoft.com>
bingwang-ms pushed a commit that referenced this issue May 30, 2022
* 910bfd4 2022-05-28 | [ACL] Add default action_list for default ACL table type (#2298) (github/master) [bingwang-ms]
* 4d6fa42 2022-05-27 | [ci] Change artifact reference pipeline to common lib pipeline. (#2294) [Shilong Liu]
* 583236f 2022-05-26 | [P4Orch] Lazy UDF match creation to avoid failure during warm reboot (#2282) [Yilan Ji]
* d7b5ff7 2022-05-26 | [ci] Use correct branch when downloading artifact. (#2292) [Shilong Liu]
* 2f13244 2022-05-26 | [ci] Improve azp trigger settings to automaticlly support new release branch. (#2289) [Shilong Liu]

Signed-off-by: Ying Xie <ying.xie@microsoft.com>
vivekrnv pushed a commit to vivekrnv/sonic-buildimage that referenced this issue Aug 26, 2022

What I did
This PR is derived from sonic-net#2205
Fix sonic-net#10425

We were seeing ACL table creation failure on some platform because action_list is mandatory, while the action_list is not provided by aclorch.

Apr  1 01:24:11.702608 str2-7050cx3-acs-03 ERR swss#orchagent: :- validate: Action list for table DATAACL is mandatory
Apr  1 01:24:11.702608 str2-7050cx3-acs-03 ERR swss#orchagent: :- doAclTableTask: Failed to create ACL table DATAACL, invalid configuration
Apr  1 01:24:11.702741 str2-7050cx3-acs-03 ERR swss#orchagent: :- validate: Action list for table EVERFLOW is mandatory
Apr  1 01:24:11.702741 str2-7050cx3-acs-03 ERR swss#orchagent: :- doAclTableTask: Failed to create ACL table EVERFLOW, invalid configuration
Apr  1 01:24:11.702926 str2-7050cx3-acs-03 ERR swss#orchagent: :- validate: Action list for table EVERFLOWV6 is mandatory
Apr  1 01:24:11.702926 str2-7050cx3-acs-03 ERR swss#orchagent: :- doAclTableTask: Failed to create ACL table EVERFLOWV6, invalid configuration
This PR fixed the issue by adding default action_list to the default ACL table type if not present.

Why I did it
Fix the ACL table creation issue.

How I verified it
Verified by running test_acl and test_everflow on Broadcom TD3 platform

Signed-off-by: bingwang <wang.bing@microsoft.com>
Co-authored-by: syuan <syuan@arista.com>
judyjoseph added a commit to judyjoseph/sonic-buildimage that referenced this issue Oct 27, 2022
7f24448 [202111] cherrypick ACL fixes (sonic-net#2298) and (sonic-net#2351) (sonic-net#2497)
judyjoseph added a commit that referenced this issue Oct 28, 2022
7f24448 [202111] cherrypick ACL fixes (#2298) and (#2351) (#2497)