Added DPU platform test cases #14152
Conversation
The pre-commit check detected issues in the files touched by this pull request. Detailed pre-commit check results: To run the pre-commit checks locally, you can follow the steps below:
@nissampa Please update the PR description with the HLD link
# Check will be added for ip address once dynamic implementation is in place
ip_address_list.append(module.get_midplane_ip(platform_api_conn, index))

check_dpu_ping_status(duthost, ip_address_list)
There is no failure check here if the ping to the DPU hosts fails. Also, it would be good to check each DPU and log accordingly to make failures easier to debug.
Added a failure check to it.
# Checking the state of dpu health again after bringing up interface and waiting for db entry to get populated
dpu_db_after = wait_until(60, 60, 0, count_dpu_modules_in_system_health_cli, duthost)

pytest_assert(dpu_db_before == count_up == count_down == dpu_db_after, "Link Flap is Tested'{}'".format(duthost.hostname))
It would be good to have the information on which DPU's link didn't come up after the test. Also, can we have a debug log in the successful case?
Added it.
duthosts.shell("config chassis modules startup %s"%dpu)
time.sleep(2)

pytest_assert(wait_until(120, 60, 0, check_dpu_module_status, duthost, num_modules, "on"),
Same here; it would be good to log the DPUs that failed to come up. For example, add a new utils function that gets the status and logs the information in case of failure.
Added the logging information in the util itself.
time.sleep(2)
output_startup_dpu = duthost.shell("config chassis module startup %s"%(dpus_name))["stdout_lines"]

pytest_assert(wait_until(120, 60, 0, check_dpu_reboot_cause, duthost, num_modules),
same here. please log the DPU status information for easy debugging.
Added it.
def test_dpu_ping_after_reboot(duthosts, enum_rand_one_per_hwsku_hostname, localhost, platform_api_conn):
    """
    @summary: Verify output of `config chassis modules startup <DPU_Number>`
could you elaborate more on function summary?
Changed it.
from tests.common.devices.sonic import * #noqa
from tests.common.helpers.assertions import pytest_assert

@pytest.fixture(scope='function', autouse=True)
Why is the fixture autouse? It's better not to put an autouse fixture in a common util file. Or you can remove the autouse so this fixture can be used more flexibly.
And the scope is function; is it necessary to power on the DPUs before each test case?
The idea is to run all the cases with all the DPUs turned on.
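For illustration, a minimal sketch of a non-autouse version of such a fixture, assuming the platform-API helper path `tests.common.helpers.platform_api` and the `check_dpu_module_status` utility discussed in this PR (its module path is assumed); tests that need all DPUs up would request it explicitly:

```python
import pytest

from tests.common.helpers.assertions import pytest_assert
from tests.common.helpers.platform_api import chassis, module
from tests.common.utilities import wait_until
# Module path assumed for the utility added by this PR
from tests.smartswitch.common.device_utils_dpu import check_dpu_module_status


@pytest.fixture(scope='function')
def dpu_power_on(duthosts, enum_rand_one_per_hwsku_hostname, platform_api_conn):
    """Power on every DPU for the requesting test; no autouse, so tests opt in."""
    duthost = duthosts[enum_rand_one_per_hwsku_hostname]
    num_modules = int(chassis.get_num_modules(platform_api_conn))
    for index in range(num_modules):
        dpu_name = module.get_name(platform_api_conn, index)
        duthost.shell("config chassis modules startup {}".format(dpu_name))
    # Poll the CLI until all DPUs report online instead of sleeping a fixed time
    pytest_assert(wait_until(300, 30, 0, check_dpu_module_status, duthost, num_modules, "on"),
                  "Not all DPUs are operationally up")
```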
for index in range(num_modules):
    dpu_name = module.get_name(platform_api_conn, index)
    output_shutdown_dpu = duthost.shell("config chassis module shutdown %s"%(dpu_name))["stdout_lines"]
    time.sleep(2)
Will this sleep be enough to wait for the shutdown? Should we use another, more stable signal?
Changed it to wait until the DPUs turn off.
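A sketch of that change, reusing the names from the snippets above and polling the module status instead of sleeping (timeouts are illustrative):

```python
from tests.common.helpers.assertions import pytest_assert
from tests.common.utilities import wait_until

for index in range(num_modules):
    dpu_name = module.get_name(platform_api_conn, index)
    duthost.shell("config chassis module shutdown {}".format(dpu_name))

# Poll until every DPU reports itself off rather than assuming 2 seconds is enough
pytest_assert(wait_until(180, 30, 0, check_dpu_module_status, duthost, num_modules, "off"),
              "Not all DPUs are operationally down after shutdown")
```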
from tests.common.helpers.assertions import pytest_assert

@pytest.fixture(scope='function', autouse=True)
def dpu_poweron_fixture(duthosts, enum_rand_one_per_hwsku_hostname, request, platform_api_conn):
Usually we don't have the "fixture" in a fixture name.
Changed it.
for i in range(len(output_ping)):
    if "0% packet loss" in output_ping[i]:
        ping_count += 1
You can take the full output from "stdout", no need to search in each line.
Changed it.
if "0% packet loss" in output_ping[i]: | ||
ping_count += 1 | ||
|
||
return (ping_count == len(ip_address_list)) |
Looks like the function can return False as soon as any of the pings fails; no need to try the remaining ones.
And the outer parentheses are not necessary.
removed outer parentheses
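A sketch that combines these points (early return, a single check on the full stdout, and the per-DPU logging asked for earlier); the exact helper body in the PR may differ:

```python
import logging


def check_dpu_ping_status(duthost, ip_address_list):
    """Return False as soon as any DPU midplane IP does not answer a ping."""
    for ip_address in ip_address_list:
        # module_ignore_errors keeps a failed ping from aborting the test run
        output_ping = duthost.command("ping -c 3 {}".format(ip_address),
                                      module_ignore_errors=True)
        logging.debug("Ping output for %s:\n%s", ip_address, output_ping["stdout"])
        # The leading space avoids matching the "100% packet loss" case
        if " 0% packet loss" not in output_ping["stdout"]:
            logging.warning("Ping to DPU %s failed", ip_address)
            return False
    return True
```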
parse_output = output_dpu_status[index]
if parse_output['oper-status'] == 'Offline':
    dpu_off_count += 1
else:
There should be a condition here for the on status, in case there is unexpected status output.
The CLI output has only three options: offline, online, or partial online.
elif power_status == "off":
    dpu_status_count = dpu_off_count

return (dpu_status_count == num_modules)
The parentheses are not necessary.
Changed it.
return (ping_count == len(ip_address_list))


def check_dpu_module_status(duthost, num_modules, power_status):
It's better to make this function also able to check a specific DPU or DPUs.
Changed it.
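One way this could look, as a sketch (`show_and_parse` and the column names are assumptions based on the CLI output discussed above): accept an optional list of DPU names and fall back to checking all modules when it is not given.

```python
import logging


def check_dpu_module_status(duthost, power_status, dpu_names=None):
    """Check the given DPUs (or all of them when dpu_names is None) against power_status."""
    expected = "Offline" if power_status == "off" else "Online"
    output_dpu_status = duthost.show_and_parse("show chassis modules status")
    for entry in output_dpu_status:
        if dpu_names is not None and entry["name"] not in dpu_names:
            continue
        if entry["oper-status"] != expected:
            logging.info("DPU %s is '%s', expected '%s'",
                         entry["name"], entry["oper-status"], expected)
            return False
    return True


# Example: wait for a single DPU (name illustrative) to go down after shutting it down
# wait_until(120, 30, 0, check_dpu_module_status, duthost, "off", ["DPU0"])
```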
return (dpu_status_count == num_modules)


def check_dpu_reboot_cause(duthost, num_modules):
It's better to make this function also able to check a specific DPU or DPUs.
Changed it.
if 'DPU' in parse_output['device'] and parse_output['cause'] == 'Unknown':
    len_show_cmd += 1

return (num_modules == len_show_cmd)
The parentheses are not necessary.
Changed it.
interface = parse_output['interface']

if interface != "eth0" and 'eth' in interface:
    output_intf_down = duthost.shell("ifconfig %s down"%(interface))["stdout_lines"]
looks like output_intf_down is not used
Changed it.
# Deleting the DB table entry after bringing down the interface
duthost.shell(
    "for key in `redis-cli -p 6380 -h 127.0.0.1 -n 13 --scan --pattern \"DPU_STATE|DPU*\" `; "
Why is the port specified here? Is it static?
Yes, it is a static port.
"do redis-cli -p 6380 -h 127.0.0.1 -n 13 del $key ; done") | ||
|
||
# Checking the state of dpu health again after bringing down interface and deleting db entry | ||
dpu_db = wait_until(60, 60, 0, count_dpu_modules_in_system_health_cli, duthost) |
Looks like the expectation here is for the wait_until to fail, which effectively amounts to a 60s sleep, isn't it? And if, in the first check, the return value of count_dpu_modules_in_system_health_cli is not 0, the wait_until will pass and the following pytest_assert will fail?
Looks like the expectation here is for the wait_until to fail, which effectively amounts to a 60s sleep, isn't it? Yes.
And if, in the first check, the return value of count_dpu_modules_in_system_health_cli is not 0, the wait_until will pass and the following pytest_assert will fail? Yes.
if status == "down/down":
    count_down += 1
    output_intf_up = duthost.shell("ifconfig %s up"%(interface))["stdout_lines"]
    time.sleep(1)
Is 1 second enough for the port to come up? Why not combine this sleep with the wait_until check?
We don't actually need to wait; the interface would be up within a second.
count_up += 1

# Checking the state of dpu health again after bringing up interface and waiting for db entry to get populated
dpu_db_after = wait_until(60, 60, 0, count_dpu_modules_in_system_health_cli, duthost)
If the above delay is not enough, there could be a false negative here.
I can increase the delay, but the DB reflects the changes within a minute; that is why it is kept at 60 seconds.
logging.info("Verifying output of '{}' on '{}'...".format(CMD_PCIE_INFO, duthost.hostname))
output_pcie_info = duthost.command(CMD_PCIE_INFO)["stdout_lines"]
pytest_assert("PASSED" == output_pcie_info[-1], "PCIe Link is good'{}'".format(duthost.hostname))
Shouldn't this be an "in" instead of "=="? The last line of the output is like this:
"PCIe Device Checking All Test ----------->>> PASSED"
Changed it.
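The adjusted assertion could look roughly like this (the failure message is also reworded here so it describes the failing case):

```python
output_pcie_info = duthost.command(CMD_PCIE_INFO)["stdout_lines"]
# The summary line is "PCIe Device Checking All Test ----------->>> PASSED", so match by substring
pytest_assert("PASSED" in output_pcie_info[-1],
              "PCIe check did not pass on '{}'".format(duthost.hostname))
```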
output_pcie_info = duthost.command(CMD_PCIE_INFO)["stdout_lines"]
pytest_assert("PASSED" == output_pcie_info[-1], "PCIe Link is good'{}'".format(duthost.hostname))

num_modules = int(chassis.get_num_modules(platform_api_conn))
Since this is used in every test, why not make it a fixture?
made it into a fixture.
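The resulting fixture, sketched to be consistent with the snippets later in this thread (the import path for the platform-API helpers is assumed):

```python
import logging

import pytest

from tests.common.helpers.platform_api import chassis


@pytest.fixture(scope='function')
def num_dpu_modules(platform_api_conn):
    """Returns the number of DPU modules."""
    num_modules = int(chassis.get_num_modules(platform_api_conn))
    logging.info("Number of DPU modules: '{}'".format(num_modules))
    return num_modules
```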
pytest_assert(wait_until(120, 60, 0, check_dpu_module_status, duthost, num_modules, "off"),
              "Not all DPUs operationally down")
Should there be a check for the PCIe link after the DPUs are shut down?
This check is in the test plan:
1.7 Check removal of pcie link between NPU and DPU
Steps:
- Use `show platform pcieinfo -c` to run the pcie info test to check everything is passing
- Use command `config chassis modules shutdown DPU<DPU_NUM>` to bring down the dpu (this will bring down the pcie link between npu and dpu)
- Use `show platform pcieinfo -c` to run the pcie info test to check the pcie link has been removed
- Use command `config chassis modules startup DPU<DPU_NUM>` to bring up the dpu (this will rescan pcie links)
- Use `show platform pcieinfo -c` to run the pcie info test to check everything is passing
- This test is to check the PCIe hot plug functionality since there is no OIR possible
Added the check according to test plan.
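A sketch of that flow following test-plan step 1.7, using the names from the snippets above; what exactly `show platform pcieinfo -c` reports while the DPUs are down is platform-dependent, so that intermediate step is only logged here:

```python
# PCIe info should pass while all DPUs are up
pytest_assert("PASSED" in duthost.command(CMD_PCIE_INFO)["stdout_lines"][-1],
              "PCIe info test failed before DPU shutdown")

# Shut down the DPUs and wait until they are operationally down
for index in range(num_modules):
    duthost.shell("config chassis modules shutdown {}".format(
        module.get_name(platform_api_conn, index)))
pytest_assert(wait_until(180, 30, 0, check_dpu_module_status, duthost, num_modules, "off"),
              "Not all DPUs are operationally down")

# With the DPUs down the PCIe links towards them should be gone; log the result
logging.info("pcieinfo after DPU shutdown: %s",
             duthost.command(CMD_PCIE_INFO)["stdout_lines"][-1])

# Start the DPUs again (this rescans the PCIe links) and re-check that everything passes
for index in range(num_modules):
    duthost.shell("config chassis modules startup {}".format(
        module.get_name(platform_api_conn, index)))
pytest_assert(wait_until(300, 30, 0, check_dpu_module_status, duthost, num_modules, "on"),
              "Not all DPUs are operationally up")
pytest_assert("PASSED" in duthost.command(CMD_PCIE_INFO)["stdout_lines"][-1],
              "PCIe info test failed after DPU startup")
```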
reboot(duthost, localhost, reboot_type=REBOOT_TYPE_COLD, wait_for_ssh=False)
wait_for_startup(duthost, localhost, 10, 300)
pytest_assert(wait_until(300, 5, 0, check_interface_status_of_up_ports, duthost),
              "Not all ports that are admin up on are operationally up")
logging.info("Interfaces are up")
Why not directly use a safe_reboot?
sonic-mgmt/tests/common/reboot.py, line 225 in 7b5156e:
safe_reboot=False, check_intf_up_ports=False):
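For reference, the suggested call would be along these lines (the keyword names are the ones visible in the referenced signature; the rest of the call follows the snippet above):

```python
from tests.common.reboot import reboot, REBOOT_TYPE_COLD

# Let the reboot helper itself wait for the DUT to come back and for ports to be up
reboot(duthost, localhost, reboot_type=REBOOT_TYPE_COLD,
       safe_reboot=True, check_intf_up_ports=True)
```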
for index in range(len(output_dpu_status)):
    parse_output = output_dpu_status[index]
    if parse_output['oper-status'] != 'Offline':
Why are some of the DPUs expected to be in "Offline" status?
Changed it. It was added before creating the power on fixture, hence the confusion.
for index in range(num_modules):
    dpu = module.get_name(platform_api_conn, index)
    ip_address_list.append(module.get_midplane_ip(platform_api_conn, index))
    duthosts.shell("config chassis modules startup %s"%dpu)
    time.sleep(2)
Why do you need to start the DPUs before the reload? You already have dpu_poweron_fixture imported.
Changed it.
logging.info("Reload configuration") | ||
duthost.shell("sudo config reload -y &>/dev/null", executable="/bin/bash") | ||
|
||
logging.info("Wait until all critical services are fully started") | ||
wait_critical_processes(duthost) |
Why not directly use the safe config reload?
sonic-mgmt/tests/common/config_reload.py, line 110 in 7b5156e:
safe_reload=False, wait_before_force_reload=0, wait_for_bgp=False,
Even in the above case safe reload is set to false
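For comparison, the reviewer's suggestion would look roughly like this (only `safe_reload` is taken from the referenced signature; other arguments are left at their defaults):

```python
from tests.common.config_reload import config_reload

# safe_reload makes the helper wait for critical services itself after the reload
config_reload(duthost, safe_reload=True)
```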
Please fix these comments. I am in the process of reviewing the other 2 files as well.
@pytest.fixture(scope='function')
def num_dpu_modules(platform_api_conn):

This new line is not necessary.
Changed it
        parse_version(duthost.os_version) <= parse_version("202405"):
    pytest.skip("Test is not supported for this testbed and os version")

darkmode = is_dark_mode_enabled(duthost, platform_api_conn)
As commented earlier, either rename the function or move these DPU dark-mode checks to a new function to provide more clarity?
I changed the function name.
def is_dark_mode_enabled(duthost, platform_api_conn):

This new line is not necessary.
Removed it.
output_config_db = duthost.command(
    'redis-cli -p 6379 -h 127.0.0.1 \
    -n 4 hgetall "CHASSIS_MODULE|{}"'.format(dpu))
if 'down' in output_config_db['stdout']:
It would be good to add a check in case output_config_db is empty due to some failure or other reason. Also add a warning log if the output is empty.
Returns the number of DPU modules
"""

num_modules = int(chassis.get_num_modules(platform_api_conn))
Add a debug log to log num_modules for easy troubleshooting.
Added it.
    -n 4 hgetall "CHASSIS_MODULE|{}"'.format(dpu))
if 'down' in output_config_db['stdout']:
    count_admin_down += 1
Add a debug log here to print the dark mode state.
Added it.
if count_admin_down == num_modules:
    return True

return False
What will happen in case only a few DPUs are in the power-on state? Do the tests proceed only on the enabled DPUs?
Yes, the test proceeds with enabled dpus.
duthost.shell("config chassis modules startup %s" % (dpu)) | ||
|
||
pytest_assert(wait_until(180, 60, 0, check_dpu_ping_status, | ||
duthost, ip_address_list), "Not all DPUs operationally up") |
Minor nit: Use "Not all DPUs are operationally up"
Changed it.
ping_count = 0
for ip_address in ip_address_list:
    output_ping = duthost.command("ping -c 3 %s" % (ip_address))
Add debug logs for all ping outputs for easy troubleshooting.
Added it.
Remaining PR LGTM. Thanks!
ip_address_list = []
num_modules = num_dpu_modules(platform_api_conn)

reboot(duthost, localhost, reboot_type=REBOOT_TYPE_COLD,
Are you rebooting the entire switch here? If yes, could you add a debug/info log?
Yes, I added the logs as well.
""" | ||
|
||
num_modules = int(chassis.get_num_modules(platform_api_conn)) | ||
logging.info("Num of moduels: '{}'".format(num_modules)) |
nit: spelling mistake
changed it.
    'redis-cli -p 6379 -h 127.0.0.1 \
    -n 4 hgetall "CHASSIS_MODULE|{}"'.format(dpu))
if output_config_db['stdout'] is None:
    logging.warn("redis cli output for chassis module state is empty")
If output_config_db is empty, you need to return False here right?
Yes, returning False here. Changed it.
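Putting the two points together (warn on an empty CONFIG_DB entry and return False), the dark-mode check could be sketched as follows; the key and field names follow the snippets above, and the function name matches the earlier snippets even though it was later renamed in the PR:

```python
import logging

# Path assumed for the platform-API helper used elsewhere in this PR
from tests.common.helpers.platform_api import module


def is_dark_mode_enabled(duthost, platform_api_conn, num_modules):
    """Dark mode: every DPU is admin down in CONFIG_DB and has to be powered on first."""
    count_admin_down = 0
    for index in range(num_modules):
        dpu = module.get_name(platform_api_conn, index)
        output_config_db = duthost.command(
            'redis-cli -p 6379 -h 127.0.0.1 -n 4 hgetall "CHASSIS_MODULE|{}"'.format(dpu))
        if not output_config_db['stdout']:
            logging.warning("redis-cli output for CHASSIS_MODULE|%s is empty", dpu)
            return False
        if 'down' in output_config_db['stdout']:
            count_admin_down += 1
    logging.debug("DPUs admin down in CONFIG_DB: %d of %d", count_admin_down, num_modules)
    return count_admin_down == num_modules
```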
Couple of minor suggestions, else LGTM. Thank you!
Thank you for addressing the comments!
/azpw run
1 similar comment
/azpw run
/AzurePipelines run
Azure Pipelines successfully started running 1 pipeline(s).
/AzurePipelines run
Commenter does not have sufficient privileges for PR 14152 in repo sonic-net/sonic-mgmt
/azpw run
/AzurePipelines run
Azure Pipelines successfully started running 1 pipeline(s).
lgtm
* Added DPU platform test cases
* modified imports for flake8 test
* Changed dirrectory structure for smartswitch test scripts
* adderessed few comments
* moved tests scripts to platform_tests dir under smartswitch
* addressed few set of comments in PR
* added logging info
* changed ping check
* addressed set of comments on PR
* flake8 errors resolved for smartswitch/common directory
* flake8 error resolved for platform_tests/test_reload_dpu.py
* flake8 error resolved for platform_tests/test_show_platform_dpu.py
* flake8 error resolved for platform_tests/test_show_platform_dpu.py
* flake8 error resolved for platform_tests/test_show_platform_dpu.py
* added test skip fixture for smartswitch and non-dark mode
* utils flak8 fix
* added platform file as variable
* missed the \ in added code for platform file
* added smartswtich and darkmode detection and lighting up dpus
* rectified the syntax error
* rectified the syntax error
* rectified the syntax error
* rectified the syntax error
* rectified the syntax error
* rectified the syntax error
* rectified the syntax error
* rectified the syntax error
* rectified the syntax error
* resolved flake8 errors
* added comments for util functions
* Made clear with utility function descriptions
* retified the error
* num of dpu modules fixtures
* resolved flake8 errors
* resolved flake8 errors
* changed link flap case to accomodate bridge interface
* removed link flap case
* resolved flake8 error
* eof fixed
* resolving pre-commit check
* Addressed set of comments on PR
* minor mistakes
* added extra line to PR checker
* minor change for PR checker
* removed functions parameter which are not fixtures
Test Plan PR Link: #12701
Add new tests:
Under test_show_platform_dpu.py:
Under test_reload_dpu.py: