
Added DPU platform test cases #14152

Merged: 51 commits into sonic-net:master, Oct 17, 2024

Conversation

@nissampa (Contributor) commented Aug 16, 2024

  • Added test_show_platform_dpu.py, containing tests for the platform CLI ... commands for the DPU, under tests/platform_tests/cli/
  • Added test_reload_dpu.py, containing tests for the reboot and reload ... commands for the DPU, under tests/platform_tests/
  • Added device_utils_dpu.py, containing utility functions for DPU platform test cases, under common/platform

Test Plan PR Link: #12701

Add new tests:

Under test_show_platform_dpu.py:

test_midplane_ip - [1.5 Check midplane ip address between NPU and DPU]
test_shutdown_power_up_dpu - [1.6 Check DPU shutdown and power up individually]
test_reboot_cause - [1.11 Check reboot cause history]
test_pcie_link - [1.7 Check removal of pcie link between NPU and DPU]

Under test_reload_dpu.py:

test_dpu_ping_after_reboot - [1.12 Check the DPU state after OS reboot]
test_show_ping_int_after_reload - [Phase 2 Test plan document]

@mssonicbld (Collaborator)

The pre-commit check detected issues in the files touched by this pull request.
The pre-commit check is mandatory; please fix the detected issues.

Detailed pre-commit check results:
trim trailing whitespace.................................................Failed
- hook id: trailing-whitespace
- exit code: 1
- files were modified by this hook

Fixing tests/platform_tests/cli/test_show_platform_dpu.py
Fixing tests/platform_tests/test_reload_dpu.py

fix end of files.........................................................Passed
check yaml...........................................(no files to check)Skipped
check for added large files..............................................Passed
check python ast.........................................................Passed
flake8...................................................................Failed
- hook id: flake8
- exit code: 1

tests/common/platform/device_utils_dpu.py:9:1: F403 'from tests.platform_tests.api.conftest import *' used; unable to detect undefined names
tests/common/platform/device_utils_dpu.py:9:1: F401 'tests.platform_tests.api.conftest.*' imported but unused
tests/common/platform/device_utils_dpu.py:10:1: F403 'from tests.common.devices.sonic import *' used; unable to detect undefined names
tests/common/platform/device_utils_dpu.py:10:1: F401 'tests.common.devices.sonic.*' imported but unused
...
[truncated extra lines, please run pre-commit locally to view full check results]

To run the pre-commit checks locally, you can follow the steps below:

  1. Ensure that the default python is python3. In the sonic-mgmt docker container, the default python is python2. You can run
     the check by activating the python3 virtual environment in the sonic-mgmt docker container or outside of the sonic-mgmt
     docker container.
  2. Ensure that the pre-commit package is installed:
     sudo pip install pre-commit
  3. Go to the repository root folder.
  4. Install the pre-commit hooks:
     pre-commit install
  5. Use pre-commit to check staged files:
     pre-commit
  6. Alternatively, you can check committed files using:
     pre-commit run --from-ref <commit_id> --to-ref <commit_id>

@nissampa nissampa marked this pull request as ready for review August 20, 2024 16:53
@nissampa nissampa requested a review from prgeor as a code owner August 20, 2024 16:53
@prgeor (Contributor) commented Aug 26, 2024

@nissampa Please update the PR description with the HLD link

# Check will be added for ip address once dynamic implementation is in place
ip_address_list.append(module.get_midplane_ip(platform_api_conn, index))

check_dpu_ping_status(duthost, ip_address_list)
Contributor:

There is no failure check here if the ping to the DPU hosts fails. Also, it would be good to check each DPU and log accordingly, to make failures easy to debug.

Contributor Author:

Added a failure check.
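
For illustration, a minimal sketch of such a per-DPU failure check. The helper name check_dpu_ping_status and the ping command come from snippets in this PR; the body below is an assumed shape, not the merged code:

```python
import logging

from tests.common.helpers.assertions import pytest_assert


def check_dpu_ping_status(duthost, ip_address_list):
    """Ping each DPU midplane IP, log every result, and report the
    unreachable DPUs so a failure is easy to debug."""
    failed_ips = []
    for ip_address in ip_address_list:
        # module_ignore_errors lets us inspect the output even when ping fails
        output_ping = duthost.command("ping -c 3 %s" % ip_address,
                                      module_ignore_errors=True)
        logging.info("Ping output for %s: %s", ip_address, output_ping["stdout"])
        # Match " 0% packet loss" with a leading space so that
        # "100% packet loss" does not count as success.
        if " 0% packet loss" not in output_ping["stdout"]:
            failed_ips.append(ip_address)
    pytest_assert(not failed_ips,
                  "Ping failed for DPU midplane IPs: {}".format(failed_ips))
    return True
```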

# Checking the state of dpu health again after bringing up interface and waiting for db entry to get populated
dpu_db_after = wait_until(60, 60, 0, count_dpu_modules_in_system_health_cli, duthost)

pytest_assert(dpu_db_before == count_up == count_down == dpu_db_after, "Link Flap is Tested'{}'".format(duthost.hostname))
Contributor:

It would be good to log which DPU's link didn't come up after the test. Also, can we have a debug log in the successful case?

Contributor Author:

Added it.

duthosts.shell("config chassis modules startup %s"%dpu)
time.sleep(2)

pytest_assert(wait_until(120, 60, 0, check_dpu_module_status, duthost, num_modules, "on"),
Contributor:

Same here; it would be good to log the DPUs that failed to come up. Consider adding a new utils function that gets the status and logs the information in case of failure.

Contributor Author:

Added the logging information in the util itself.
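
A hedged sketch of what such a util could look like; the field names ('name', 'oper-status') follow the snippets quoted in this review, and show_and_parse is the sonic-mgmt helper that returns CLI tables as a list of dicts:

```python
import logging


def check_dpu_module_status(duthost, power_status, dpu_names):
    """Return True only if every named DPU reached the expected oper status,
    logging the DPUs that did not."""
    expected = "Offline" if power_status == "off" else "Online"
    output_dpu_status = duthost.show_and_parse("show chassis modules status")
    failed = [(entry["name"], entry["oper-status"])
              for entry in output_dpu_status
              if entry["name"] in dpu_names and entry["oper-status"] != expected]
    if failed:
        logging.error("DPUs not in expected state '%s': %s", expected, failed)
        return False
    logging.debug("All of %s reached state '%s'", dpu_names, expected)
    return True
```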

time.sleep(2)
output_startup_dpu = duthost.shell("config chassis module startup %s"%(dpus_name))["stdout_lines"]

pytest_assert(wait_until(120, 60, 0, check_dpu_reboot_cause, duthost, num_modules),
Contributor:

Same here; please log the DPU status information for easy debugging.

Contributor Author:

Added it.


def test_dpu_ping_after_reboot(duthosts, enum_rand_one_per_hwsku_hostname, localhost, platform_api_conn):
"""
@summary: Verify output of `config chassis modules startup <DPU_Number>`
Contributor:

Could you elaborate more in the function summary?

Contributor Author:

Changed it.

from tests.common.devices.sonic import * #noqa
from tests.common.helpers.assertions import pytest_assert

@pytest.fixture(scope='function', autouse=True)
Contributor:

Why is the fixture autouse? It's better not to put an autouse fixture in a common util file. Alternatively, you can remove the autouse so this fixture can be used more flexibly.

Contributor:

Also, the scope is function; is it necessary to power on the DPUs before each test case?

Contributor Author:

The idea is to run all the cases with all the DPUs turned on.
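
For reference, a sketch of the same fixture without autouse, so tests opt in explicitly. Names follow this PR, and the chassis/module import path is the one platform-API tests elsewhere in sonic-mgmt use (an assumption here):

```python
import pytest

from tests.common.helpers.platform_api import chassis, module


@pytest.fixture(scope='function')
def dpu_power_on(duthosts, enum_rand_one_per_hwsku_hostname, platform_api_conn):
    """Power on all DPU modules for tests that request this fixture."""
    duthost = duthosts[enum_rand_one_per_hwsku_hostname]
    for index in range(int(chassis.get_num_modules(platform_api_conn))):
        dpu_name = module.get_name(platform_api_conn, index)
        duthost.shell("config chassis modules startup %s" % dpu_name)
```

A test would then list dpu_power_on in its signature instead of getting the behavior implicitly.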

for index in range(num_modules):
dpu_name = module.get_name(platform_api_conn, index)
output_shutdown_dpu = duthost.shell("config chassis module shutdown %s"%(dpu_name))["stdout_lines"]
time.sleep(2)
Contributor:

Will this sleep be enough to wait for the shutdown? Should we use another, more stable signal?

Contributor Author:

Changed it to wait until the DPUs turn off.
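
A sketch of that change, using the wait_until helper with the same argument order seen elsewhere in this PR (timeout, interval, delay, condition, args):

```python
duthost.shell("config chassis module shutdown %s" % dpu_name)
pytest_assert(
    wait_until(120, 10, 0, check_dpu_module_status, duthost, num_modules, "off"),
    "DPU(s) did not reach the 'off' state after shutdown")
```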

from tests.common.helpers.assertions import pytest_assert

@pytest.fixture(scope='function', autouse=True)
def dpu_poweron_fixture(duthosts, enum_rand_one_per_hwsku_hostname, request, platform_api_conn):
Contributor:

Usually we don't include "fixture" in a fixture name.

Contributor Author:

Changed it.

Comment on lines 47 to 49
for i in range(len(output_ping)):
if "0% packet loss" in output_ping[i]:
ping_count += 1
Contributor:

You can take the full output from "stdout", no need to search in each line.

Contributor Author:

Changed it.

if "0% packet loss" in output_ping[i]:
ping_count += 1

return (ping_count == len(ip_address_list))
Contributor:

Looks like the function can return False as soon as any ping fails; there is no need to try the remaining ones.

Contributor:

And the outer parentheses are not necessary.

Contributor Author:

Removed the outer parentheses.

parse_output = output_dpu_status[index]
if parse_output['oper-status'] == 'Offline':
dpu_off_count += 1
else:
Contributor:

There should be a condition here for the 'on' status, in case there is unexpected status output.

Contributor Author:

The CLI output has only three options: offline, online, or partial online.
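
Even with only three possible values, an explicit branch keeps a partial or unexpected status visible instead of silently counting it; a sketch that would slot into the loop quoted above:

```python
for entry in output_dpu_status:
    status = entry['oper-status']
    if status == 'Offline':
        dpu_off_count += 1
    elif status == 'Online':
        dpu_on_count += 1
    else:
        # 'PartialOnline' or anything unexpected is logged for debugging
        logging.warning("DPU %s reported status '%s'", entry.get('name'), status)
```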

elif power_status == "off":
dpu_status_count = dpu_off_count

return (dpu_status_count == num_modules)
@congh-nvidia (Contributor) commented Aug 30, 2024:

The parentheses are not necessary.

Contributor Author:

Changed it.

return (ping_count == len(ip_address_list))


def check_dpu_module_status(duthost, num_modules, power_status):
Contributor:

It's better to make this function also able to check a specific DPU/DPUs.

Contributor Author:

Changed it.

return (dpu_status_count == num_modules)


def check_dpu_reboot_cause(duthost, num_modules):
Contributor:

It's better to make this function also able to check a specific DPU/DPUs.

Contributor Author:

Changed it.

if 'DPU' in parse_output['device'] and parse_output['cause'] == 'Unknown':
len_show_cmd += 1

return (num_modules == len_show_cmd)
Contributor:

The parentheses are not necessary.

Contributor Author:

Changed it.

interface = parse_output['interface']

if interface != "eth0" and 'eth' in interface:
output_intf_down = duthost.shell("ifconfig %s down"%(interface))["stdout_lines"]
Contributor:

Looks like output_intf_down is not used.

Contributor Author:

Changed it.


# Deleting the DB table entry after bringing down the interface
duthost.shell(
"for key in `redis-cli -p 6380 -h 127.0.0.1 -n 13 --scan --pattern \"DPU_STATE|DPU*\" `; "
Contributor:

Why is the port specified here? Is it static?

@nissampa (Contributor Author) commented Sep 4, 2024:

Yes, it is a static port.

"do redis-cli -p 6380 -h 127.0.0.1 -n 13 del $key ; done")

# Checking the state of dpu health again after bringing down interface and deleting db entry
dpu_db = wait_until(60, 60, 0, count_dpu_modules_in_system_health_cli, duthost)
Contributor:

Looks like the expectation here is that the wait_until fails, which actually equals a 60s sleep, doesn't it? And if, in the first check, the return value of count_dpu_modules_in_system_health_cli is not 0, the wait_until will pass and the following pytest_assert will fail?

Contributor Author:

> Looks like the expectation here is that the wait_until fails, which actually equals a 60s sleep, doesn't it?

Yes.

> And if, in the first check, the return value of count_dpu_modules_in_system_health_cli is not 0, the wait_until will pass and the following pytest_assert will fail?

Yes.
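
An aside for future readers: a clearer pattern than relying on a wait_until timeout is to wait for the condition the test actually expects. A minimal sketch (the counter function is from this PR; expecting zero entries right after the DB delete is an assumption):

```python
pytest_assert(
    wait_until(60, 10, 0,
               lambda: count_dpu_modules_in_system_health_cli(duthost) == 0),
    "DPU entries still present in system health CLI after DB delete")
```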

if status == "down/down":
count_down += 1
output_intf_up = duthost.shell("ifconfig %s up"%(interface))["stdout_lines"]
time.sleep(1)
Contributor:

Is 1 second enough for the port to come up? Why not combine this sleep with the wait_until check?

Contributor Author:

We don't actually need to wait; the interface would be up within a second.

count_up += 1

# Checking the state of dpu health again after bringing up interface and waiting for db entry to get populated
dpu_db_after = wait_until(60, 60, 0, count_dpu_modules_in_system_health_cli, duthost)
Contributor:

If the delay above is not enough, there could be a false negative here.

Contributor Author:

I can increase the delay, but the DB reflects the changes within a minute; that is why it is kept at 60 secs.


logging.info("Verifying output of '{}' on '{}'...".format(CMD_PCIE_INFO, duthost.hostname))
output_pcie_info = duthost.command(CMD_PCIE_INFO)["stdout_lines"]
pytest_assert("PASSED" == output_pcie_info[-1], "PCIe Link is good'{}'".format(duthost.hostname))
Contributor:

Shouldn't this be an "in" instead of "=="? The last line of the output looks like this:
"PCIe Device Checking All Test ----------->>> PASSED"

Contributor Author:

Changed it.
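
The suggested fix, sketched; note the assert message also reads better when it describes the failure rather than the success:

```python
output_pcie_info = duthost.command(CMD_PCIE_INFO)["stdout_lines"]
pytest_assert("PASSED" in output_pcie_info[-1],
              "PCIe link check failed on '{}'".format(duthost.hostname))
```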

output_pcie_info = duthost.command(CMD_PCIE_INFO)["stdout_lines"]
pytest_assert("PASSED" == output_pcie_info[-1], "PCIe Link is good'{}'".format(duthost.hostname))

num_modules = int(chassis.get_num_modules(platform_api_conn))
Contributor:

Since this is used everywhere, why not make it a fixture?

Contributor Author:

made it into a fixture.
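
The resulting fixture, roughly as it appears later in this PR (chassis and logging are assumed to be in scope, as in the snippets below):

```python
@pytest.fixture(scope='function')
def num_dpu_modules(platform_api_conn):
    """Return the number of DPU modules reported by the platform API."""
    num_modules = int(chassis.get_num_modules(platform_api_conn))
    logging.info("Number of modules: '{}'".format(num_modules))
    return num_modules
```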


pytest_assert(wait_until(120, 60, 0, check_dpu_module_status, duthost, num_modules, "off"),
"Not all DPUs operationally down")

Contributor:

Should there be a check for the PCIe state here, after the DPUs are shut down?

This check is in the test plan:

1.7 Check removal of pcie link between NPU and DPU

Steps

  • Use show platform pcieinfo -c to run the pcie info test to check everything is passing
  • Use command config chassis modules shutdown DPU<DPU_NUM> to bring down the dpu (This will bring down the pcie link between npu and dpu)
  • Use show platform pcieinfo -c to run the pcie info test to check pcie link has been removed
  • Use command config chassis modules startup DPU<DPU_NUM> to bring up the dpu (This will rescan pcie links)
  • Use show platform pcieinfo -c to run the pcie info test to check everything is passing
  • This test is to check the PCIe hot-plug functionality, since there is no OIR possible

Contributor Author:

Added the check according to the test plan.
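
A sketch of that test-plan sequence; CMD_PCIE_INFO and the chassis commands are from this PR, while the glue below is an assumed outline rather than the merged code:

```python
def pcie_check_passes():
    # the last line of 'show platform pcieinfo -c' carries the verdict
    output = duthost.command(CMD_PCIE_INFO)["stdout_lines"]
    return "PASSED" in output[-1]


pytest_assert(pcie_check_passes(), "PCIe check failed before DPU shutdown")

duthost.shell("config chassis modules shutdown %s" % dpu_name)
pytest_assert(wait_until(120, 10, 0, lambda: not pcie_check_passes()),
              "PCIe link still present after DPU shutdown")

duthost.shell("config chassis modules startup %s" % dpu_name)
pytest_assert(wait_until(120, 10, 0, pcie_check_passes),
              "PCIe check did not recover after DPU startup")
```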

Comment on lines 39 to 43
reboot(duthost, localhost, reboot_type=REBOOT_TYPE_COLD, wait_for_ssh=False)
wait_for_startup(duthost, localhost, 10, 300)
pytest_assert(wait_until(300, 5, 0, check_interface_status_of_up_ports, duthost),
"Not all ports that are admin up on are operationally up")
logging.info("Interfaces are up")
Contributor:

Why not directly use a safe_reboot?

safe_reboot=False, check_intf_up_ports=False):
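
Sketched, the suggestion is to let the reboot helper perform the post-reboot health checks itself; both flags exist in the helper's signature quoted above, and enabling them here is the reviewer's proposal, not the merged code:

```python
reboot(duthost, localhost, reboot_type=REBOOT_TYPE_COLD,
       safe_reboot=True, check_intf_up_ports=True)
```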


for index in range(len(output_dpu_status)):
parse_output = output_dpu_status[index]
if parse_output['oper-status'] != 'Offline':
Contributor:

Why are some of the DPUs expected to be in “Offline” status?

Contributor Author:

Changed it. It was added before the power-on fixture was created, hence the confusion.

Comment on lines 63 to 67
for index in range(num_modules):
dpu = module.get_name(platform_api_conn, index)
ip_address_list.append(module.get_midplane_ip(platform_api_conn, index))
duthosts.shell("config chassis modules startup %s"%dpu)
time.sleep(2)
@congh-nvidia (Contributor) commented Aug 30, 2024:

Why do you need to start the DPUs before the reload? You already have dpu_poweron_fixture imported.

Contributor Author:

Changed it.

Comment on lines 72 to 76
logging.info("Reload configuration")
duthost.shell("sudo config reload -y &>/dev/null", executable="/bin/bash")

logging.info("Wait until all critical services are fully started")
wait_critical_processes(duthost)
Contributor:

Why not directly use the safe config reload?

safe_reload=False, wait_before_force_reload=0, wait_for_bgp=False,

Contributor Author:

Even in the case above, safe reload is set to false.

@vvolam (Contributor) left a comment:

Please fix these comments. I am in the process of reviewing the other 2 files as well.


@pytest.fixture(scope='function')
def num_dpu_modules(platform_api_conn):

Contributor:

This new line is not necessary.

Contributor Author:

Changed it

parse_version(duthost.os_version) <= parse_version("202405"):
pytest.skip("Test is not supported for this testbed and os version")

darkmode = is_dark_mode_enabled(duthost, platform_api_conn)
Contributor:

As commented earlier, either rename the function or move this DPU dark-mode check to a new function for more clarity?

Contributor Author:

I changed the function name.



def is_dark_mode_enabled(duthost, platform_api_conn):

Contributor:

This new line is not necessary.

Contributor Author:

Removed it.

output_config_db = duthost.command(
'redis-cli -p 6379 -h 127.0.0.1 \
-n 4 hgetall "CHASSIS_MODULE|{}"'.format(dpu))
if 'down' in output_config_db['stdout']:
Contributor:

It would be good to add a check in case output_config_db is empty due to some failure. Also, add a warning log if the output is empty.

Returns the number of DPU modules
"""

num_modules = int(chassis.get_num_modules(platform_api_conn))
Contributor:

Add a debug log to log num_modules for easy troubleshooting.

@nissampa (Contributor Author) commented Oct 7, 2024:

Added it.

-n 4 hgetall "CHASSIS_MODULE|{}"'.format(dpu))
if 'down' in output_config_db['stdout']:
count_admin_down += 1

Contributor:

Add a debug log here to print the dark mode status.

Contributor Author:

Added it.

if count_admin_down == num_modules:
return True

return False
Contributor:

What will happen if only a few DPUs are in the power-on state? Do the tests proceed only on the enabled DPUs?

Contributor Author:

Yes, the test proceeds with the enabled DPUs.

duthost.shell("config chassis modules startup %s" % (dpu))

pytest_assert(wait_until(180, 60, 0, check_dpu_ping_status,
duthost, ip_address_list), "Not all DPUs operationally up")
Contributor:

Minor nit: Use "Not all DPUs are operationally up"

Contributor Author:

Changed it.


ping_count = 0
for ip_address in ip_address_list:
output_ping = duthost.command("ping -c 3 %s" % (ip_address))
Contributor:

Add debug logs for all ping outputs for easy troubleshooting.

Contributor Author:

Added it.

@vvolam (Contributor) left a comment:

Remaining PR LGTM. Thanks!

ip_address_list = []
num_modules = num_dpu_modules(platform_api_conn)

reboot(duthost, localhost, reboot_type=REBOOT_TYPE_COLD,
Contributor:

Are you rebooting the entire switch here? If yes, could you add a debug/info log?

Contributor Author:

Yes, I added the logs as well.

"""

num_modules = int(chassis.get_num_modules(platform_api_conn))
logging.info("Num of moduels: '{}'".format(num_modules))
Contributor:

nit: spelling mistake

Contributor Author:

changed it.

'redis-cli -p 6379 -h 127.0.0.1 \
-n 4 hgetall "CHASSIS_MODULE|{}"'.format(dpu))
if output_config_db['stdout'] is None:
logging.warn("redis cli output for chassis module state is empty")
Contributor:

If output_config_db is empty, you need to return False here, right?

Contributor Author:

Yes, returning False here. Changed it.
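
The fix, sketched against the snippet above; note an empty stdout is an empty string rather than None, so a truthiness check is the safer guard here:

```python
output_config_db = duthost.command(
    'redis-cli -p 6379 -h 127.0.0.1 \
    -n 4 hgetall "CHASSIS_MODULE|{}"'.format(dpu))
if not output_config_db['stdout']:
    logging.warning("redis-cli output for chassis module state is empty")
    return False
if 'down' in output_config_db['stdout']:
    count_admin_down += 1
```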

@vvolam (Contributor) left a comment:

A couple of minor suggestions, else LGTM. Thank you!

@nissampa nissampa requested a review from vvolam October 8, 2024 21:07
@vvolam (Contributor) left a comment:

Thank you for addressing the comments!

@vvolam vvolam requested a review from prgeor October 8, 2024 21:29
@vvolam (Contributor) commented Oct 9, 2024

/azpw run

1 similar comment
@nissampa (Contributor Author) commented Oct 9, 2024

/azpw run

@mssonicbld (Collaborator)

/AzurePipelines run


Azure Pipelines successfully started running 1 pipeline(s).

@nissampa (Contributor Author)

@vvolam and @prgeor,

All PR checks have passed and the comments are addressed.
Can we close on this?

@vvolam (Contributor) commented Oct 10, 2024

> @vvolam and @prgeor,
> All PR checks have passed and the comments are addressed. Can we close on this?

@nissampa I have approved the PR as it looks good to me.

@nissampa (Contributor Author)

/AzurePipelines run


Commenter does not have sufficient privileges for PR 14152 in repo sonic-net/sonic-mgmt

@nissampa (Contributor Author)

/azpw run

@mssonicbld (Collaborator)

/AzurePipelines run


Azure Pipelines successfully started running 1 pipeline(s).

@bpar9 (Contributor) left a comment:
lgtm

@bpar9 bpar9 merged commit 3ceada7 into sonic-net:master Oct 17, 2024
15 checks passed
vikshaw-Nokia pushed a commit to vikshaw-Nokia/sonic-mgmt that referenced this pull request Oct 23, 2024
* Added DPU platform test cases

* modified imports for flake8 test

* Changed dirrectory structure for smartswitch test scripts

* adderessed few comments

* moved tests scripts to platform_tests dir under smartswitch

* addressed few set of comments in PR

* added logging info

* changed ping check

* addressed set of comments on PR

* flake8 errors resolved for smartswitch/common directory

* flake8 error resolved for platform_tests/test_reload_dpu.py

* flake8 error resolved for platform_tests/test_show_platform_dpu.py

* flake8 error resolved for platform_tests/test_show_platform_dpu.py

* flake8 error resolved for platform_tests/test_show_platform_dpu.py

* added test skip fixture for smartswitch and non-dark mode

* utils flak8 fix

* added platform file as variable

* missed the \ in added code for platform file

* added smartswtich and darkmode detection and lighting up dpus

* rectified the syntax error

* rectified the syntax error

* rectified the syntax error

* rectified the syntax error

* rectified the syntax error

* rectified the syntax error

* rectified the syntax error

* rectified the syntax error

* rectified the syntax error

* resolved flake8 errors

* added comments for util functions

* Made clear with utility function descriptions

* retified the error

* num of dpu modules fixtures

* resolved flake8 errors

* resolved flake8 errors

* changed link flap case to accomodate bridge interface

* removed link flap case

* resolved flake8 error

* eof fixed

* resolving pre-commit check

* Addressed set of comments on PR

* minor mistakes

* added extra line to PR checker

* minor change for PR checker

* removed functions parameter which are not fixtures
sreejithsreekumaran pushed a commit to sreejithsreekumaran/sonic-mgmt that referenced this pull request Nov 15, 2024
(same commit list as above)