[Dynamic buffer calc] Test cases for dynamic buffer calculation #1971
Conversation
@neethajohn will you be able to review this PR? This is a good enhancement IMO, which can help run the same test on different SKUs.
Hi @liat-grozovik
This PR depends on PR #2418 |
This pull request introduces 1 alert when merging 549c00faf5d44a9d1176731ded5cac8848904ac1 into 9507f72 - view the new alerts on LGTM.com.
Will the values for xon, xoff and headroom that are configured as part of various tests work for all vendors?
Other minor comments:
Please use pytest_assert instead of assert.
Use Google-style docstrings
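The two review requests above can be illustrated with a short sketch. `pytest_assert` is the sonic-mgmt helper normally imported from `tests.common.helpers.assertions`; a minimal stand-in is defined here so the snippet runs standalone, and the headroom check itself is a hypothetical example, not a test from this PR.

```python
def pytest_assert(condition, message=""):
    """Minimal stand-in for tests.common.helpers.assertions.pytest_assert."""
    if not condition:
        raise AssertionError(message)


def check_headroom(xoff, xon, headroom):
    """Check that the configured headroom can hold the xoff and xon thresholds.

    Args:
        xoff: Configured xoff threshold, in bytes.
        xon: Configured xon threshold, in bytes.
        headroom: Configured headroom size, in bytes.
    """
    # Illustrative sanity check only; real vendor-specific limits differ.
    pytest_assert(headroom >= max(xoff, xon),
                  "headroom {} is too small for xoff {} / xon {}".format(
                      headroom, xoff, xon))
```

A failing check then reports the offending values in the assertion message instead of a bare `assert` traceback.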
fix typo [Dyanmic] to [Dynamic]
1. Add test cases for dynamic buffer calculation:
   - changing speed/cable length test
   - headroom override
   - additional lossless PG
   - exceeding the accumulative headroom size of a port
   - default buffer model
2. Adjust the QoS test, fetching configuration from:
   - APPL_DB if buffer_model is defined (this is the case in master)
   - CONFIG_DB otherwise (for 201911)
3. Add a checking script to test whether the buffer model is as expected. This can be used by warm-reboot.

Signed-off-by: Stephen Sun <stephens@nvidia.com>
- Use pytest_assert and Google-style docstrings
- Verify buffer profile and pool against ASIC_DB
- Move hardcoded parameters to a predefined json file
- Add comments to make it easy to understand

Signed-off-by: Stephen Sun <stephens@nvidia.com>
Signed-off-by: Stephen Sun <stephens@nvidia.com>
**What I did**

***Support dynamic buffer calculation***

1. Extend the CLI options for `buffermgrd`:
   - `-a`: asic_table provided
   - `-p`: peripheral_table provided

   `buffermgrd` starts in the dynamic headroom calculation mode when `-a` is provided. Otherwise, it starts in the legacy mode (pg_headroom_profile lookup).
2. A new class is provided for dynamic buffer calculation while the old one remains. The daemon instantiates the corresponding class according to the CLI options when it starts.
3. In both modes, `buffermgrd` copies the BUFFER_XXX tables from CONFIG_DB to APPL_DB, and `bufferorch` consumes the BUFFER_XXX tables from APPL_DB.

***Backward compatibility***

For the legacy mode, backward compatibility is provided. As mentioned above, `buffermgrd` checks whether the json file representing the `ASIC_TABLE` exists when it starts:
- If yes, it starts the dynamic buffer calculation mode.
- Otherwise, it starts the compatible mode, which is the old lookup mode in the new code committed in this PR.

This logic is in `cfgmgr/buffermgrd.cpp`. The buffer-handling logic in `buffermgrd` is unchanged in the legacy mode. The differences are:
- In the legacy mode (the old code), there are no buffer-related tables in `APPL_DB`; all tables are in `CONFIG_DB`:
  - `buffermgrd` listens to the `PORT` and `CABLE_LENGTH` tables in `CONFIG_DB` and inserts the buffer profiles into the `BUFFER_PROFILE` table.
  - `bufferorch` listens to the buffer-related tables in `CONFIG_DB` and calls the SAI API correspondingly.
- In the compatible mode, `buffermgrd` listens to tables in `CONFIG_DB` and copies them into `APPL_DB`:
  - `buffermgrd`:
    - listens to the `PORT` and `CABLE_LENGTH` tables in `CONFIG_DB` and inserts the buffer profiles into the `BUFFER_PROFILE` table in `CONFIG_DB` (not changed)
    - listens to the buffer-related tables in `CONFIG_DB` and copies them into `APPL_DB`
  - `bufferorch` listens to `APPL_DB` and calls the SAI API correspondingly (the difference is the database it listens to).
- `db_migrator` is responsible for copying the buffer-related tables from `CONFIG_DB` to `APPL_DB` when the system is warm-booted from the old image to the new image for the first time.

The compatible code is in `cfgmgr/buffermgr.cpp`, `orchagent/bufferorch.cpp`, and `db_migrator` (in the [sonic-utilities PR](sonic-net/sonic-utilities#973)).

**Why I did it**

**How I verified it**

1. vs test
2. Regression test ([PR: [Dynamic buffer calc] Test cases for dynamic buffer calculation](sonic-net/sonic-mgmt#1971))

**Dynamic buffer details**

1. In the dynamic buffer calculation mode, three lua plugins are provided for vendor-specific operations:
   - `buffer_headroom_<vendor>.lua`, for calculating headroom size
   - `buffer_pool_<vendor>.lua`, for calculating buffer pool size
   - `buffer_check_headroom_<vendor>.lua`, for checking whether the headroom exceeds the limit
2. During initialization, the daemon:
   - loads asic_table and peripheral_table from the given json file, parses them, and pushes them into STATE_DB.ASIC_TABLE and STATE_DB.PERIPHERAL_TABLE respectively
   - loads all plugins
   - tries to load STATE_DB.BUFFER_MAX_PARAM.mmu_size, which is used for updating the buffer pool size
   - starts a timer for periodic buffer pool size audits
3. The daemon listens to and handles the following tables from CONFIG_DB. The tables are cached internally in the daemon to save access time:
   - BUFFER_POOL:
     - if the size is provided, insert the entry into APPL_DB
     - otherwise, cache it and push it to APPL_DB after the size is calculated by the lua plugin
   - BUFFER_PROFILE and BUFFER_PG:
     - items for ingress lossless headroom need to be cached and handled (according to the design)
     - other items are inserted into APPL_DB directly
   - PORT_TABLE, for ports' speed and MTU updates
   - CABLE_LENGTH, for ports' cable lengths
4. Other tables are copied to APPL_DB directly:
   - BUFFER_QUEUE
   - BUFFER_PORT_INGRESS_PROFILE_LIST
   - BUFFER_PORT_EGRESS_PROFILE_LIST
5. BufferOrch is modified accordingly:
   - Consume the buffer-relevant tables from APPL_DB instead of CONFIG_DB
   - For BUFFER_POOL, don't set ingress/egress and static/dynamic in SAI if the pool already exists, because they are create-only
   - For BUFFER_PROFILE, don't set the pool, for the same reason
6. Warm reboot:
   - db_migrator is responsible for copying the data from CONFIG_DB to APPL_DB if the switch is warm-rebooted from an old image to the new image for the first time
   - no specific handling on the daemon side
7. Provide a vstest script.
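The BUFFER_POOL handling rule described above (push pools with an explicit size straight to APPL_DB, cache the rest until the vendor lua plugin calculates a size) can be sketched as a toy model. This is illustrative Python, not the C++ buffermgrd code; the pool names, field names, and dict-based "databases" are examples only.

```python
appl_db = {}        # stands in for APPL_DB's BUFFER_POOL table
pending_pools = {}  # pools cached until a size is calculated


def handle_buffer_pool(name, fields):
    """Handle one CONFIG_DB BUFFER_POOL entry (a dict of field -> value)."""
    if "size" in fields:
        appl_db[name] = dict(fields)        # size given: push to APPL_DB now
    else:
        pending_pools[name] = dict(fields)  # size unknown: cache for later


def apply_calculated_sizes(sizes):
    """Push cached pools to APPL_DB once sizes (name -> bytes) are known."""
    for name, size in sizes.items():
        if name in pending_pools:
            entry = pending_pools.pop(name)
            entry["size"] = str(size)
            appl_db[name] = entry


# A pool with an explicit size lands in APPL_DB immediately...
handle_buffer_pool("egress_lossy_pool",
                   {"type": "egress", "mode": "dynamic", "size": "7326924"})
# ...while one without a size waits for the calculated value.
handle_buffer_pool("ingress_lossless_pool",
                   {"type": "ingress", "mode": "dynamic"})
apply_calculated_sizes({"ingress_lossless_pool": 7326924})
```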
Signed-off-by: Stephen Sun <stephens@nvidia.com>
Signed-off-by: Stephen Sun <stephens@nvidia.com>
Signed-off-by: Stephen Sun <stephens@nvidia.com>
@neethajohn would you like to review and provide comments or should I go and merge it?
It's more convenient to leave it as a class member for users.

Signed-off-by: Stephen Sun <stephens@nvidia.com>
retest vsimage please
- Use capital letters for global variable names
- Remove spaces between "=" in argument lists

Signed-off-by: Stephen Sun <stephens@nvidia.com>
Signed-off-by: Stephen Sun <stephens@nvidia.com>
Description of PR
Summary:
Support test cases for dynamic buffer calculation
Type of change
Approach
What is the motivation for this PR?
Support test cases for dynamic buffer calculation
How did you do it?
To test whether the buffer configuration (`BUFFER_POOL`, `BUFFER_PROFILE`, `BUFFER_PG`) has been correctly deployed to APPL_DB on receiving the following event.

The QoS test has been updated accordingly (fetching tables from `APPL_DB` instead of `CONFIG_DB`).
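The table-source selection can be sketched as follows. This is an illustration of the rule above (APPL_DB when `buffer_model` is defined, i.e. on master; CONFIG_DB otherwise, i.e. on 201911), not the actual test helper; the key separators follow the usual SONiC convention of `:` for APPL_DB and `|` for CONFIG_DB.

```python
def buffer_table_source(device_metadata):
    """Pick the database to fetch buffer tables from.

    Args:
        device_metadata: Dict of DEVICE_METADATA localhost fields.

    Returns:
        A (db_name, key_separator) tuple.
    """
    if device_metadata.get("buffer_model"):
        # buffer_model present: master image, buffer tables live in APPL_DB
        return "APPL_DB", ":"
    # no buffer_model: 201911 image, buffer tables live in CONFIG_DB
    return "CONFIG_DB", "|"
```

For example, with `{"buffer_model": "dynamic"}` this returns `("APPL_DB", ":")`, so the test would read keys like `BUFFER_POOL_TABLE:ingress_lossless_pool`.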
How did you verify/test it?
Any platform specific information?
Supported testbed topology if it's a new test case?
All topologies.
Documentation