Fast-reboot Flow Improvements HLD #980
Conversation
Signed-off-by: Shlomi Bitton <shlomibi@nvidia.com>
Force-pushed from 8498931 to 8837dc2
[I haven’t followed SONiC much lately (outside the Linux kernel changes), so please excuse my ignorance and ignore my comments if they do not apply.]
Thank you for drafting this. Unfortunately, it’s quite hard to review, as the sentences do not line-wrap, the text still has some style issues, and the wording is often bumpy.
A commit message should also be added with a very short summary of the goal, perhaps with an example analysis of the state of a current system, so it’s clear what needs to be improved.
# 1 Overview
The goal of SONiC fast-reboot is to be able to restart and upgrade SONiC software with a data plane disruption less than 30 seconds and control plane less than 90 seconds. Today we don't have any indication of the fast-reboot status and some flows are delayed with a timer because of it, like enablement of flex counters. In order to have such indicator, re-use of the fastfast-reboot infrastructure can be used.
- Where do the 30 s and 90 s come from? That doesn't sound very fast at all. This should probably be explained further below, along with data on the current timings.
- Please link to the timer for the flex counters.
- These are MSFT requirements; we can discuss them at the design review and clarify the motivation behind these numbers.
- Added
Each network application will experience similar processing flow.
I have a hard time understanding the relation in the paragraph. Can you rephrase?
Done. Is it clear now?
If not, can you specify what is not clear?
Application and corresponding orchagent sub modules need to work together to restore the original data and push it to the ASIC.
Applications?
neighsyncd, fpmsyncd, etc.
Minor: instead of "original data", prefer "preboot state".
Done
Take neighbor as an example, upon restart operation every neighbor we had prior the reboot should be created again after resetting the ASIC.
prior to
Fixed
We should also synchronize the actual neighbor state after recovering it, the MAC of the neighbor could have changed, went down for some reason etc.
In this case, restore_neighbors.py script will align the network state with the switch state by sending ARP/NDP to all known neighbors prior the reboot.
Please mark up and link to restore_neighbors.py.
Done
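The restore_neighbors.py flow described above (sending ARP/NDP to every neighbor known before the reboot) boils down to emitting who-has requests on the wire. Below is a minimal sketch of building such an ARP request frame by hand — this is an illustration of the idea, not the actual restore_neighbors.py implementation, and all MAC/IP values are made up:

```python
import socket
import struct

def build_arp_request(src_mac: bytes, src_ip: str, dst_ip: str) -> bytes:
    """Build a broadcast Ethernet frame carrying an ARP who-has request."""
    eth_hdr = struct.pack("!6s6sH",
                          b"\xff" * 6,   # broadcast destination MAC
                          src_mac,       # our MAC
                          0x0806)        # EtherType: ARP
    arp_pkt = struct.pack("!HHBBH6s4s6s4s",
                          1,             # HTYPE: Ethernet
                          0x0800,        # PTYPE: IPv4
                          6, 4,          # HLEN, PLEN
                          1,             # OPER: request (who-has)
                          src_mac,                    # sender hardware address
                          socket.inet_aton(src_ip),   # sender protocol address
                          b"\x00" * 6,                # target MAC unknown
                          socket.inet_aton(dst_ip))   # target protocol address
    return eth_hdr + arp_pkt

# Illustrative addresses only.
frame = build_arp_request(b"\x02\x00\x00\x00\x00\x01", "10.0.0.1", "10.0.0.2")
```

In practice such a frame would be sent per-interface on a raw AF_PACKET socket (which requires root), and IPv6 neighbors would get an NDP Neighbor Solicitation instead of ARP.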
- Recover all applications state with the new image to the previous state prior the reboot.
- Recover ASIC state after reset to the previous state prior the reboot.
- Recover the Kernel state after reset to the previous state prior the reboot.
- Synd the Kernel and ASIC with changes on the network which happen during fast-reboot.
Sync
Fixed
### SWSS docker
When swss docker start with the new kernel, all the port/LAG, vlan, interface, arp and route data should be restored from CONFIG DB, APP DB, Linux Kernel and other reliable sources. There could be ARP, FDB changes during the restart window, proper sync processing should be performed.
… SWSS Docker starts …
Fixed
### Syncd docker
The restart of syncd docker should leave data plane intact until it starts again with the new kernel. After restart, syncd configure the HW with the state prior the reboot by all network applications.
configures
Fixed
## 4.1 Orchagent Point Of View
When orchagent start with the new SONiC image, the same infrastructure we use to reconsile fastfast-boot will start.
- starts
- reconcile
Fixed
### NOTICE
'warmRestoreValidation' might fail the operation just like in fastfast-reboot case, if the way orchagent process an event from the DB is handled differently with the new software version the task will fail to execute and fast-reboot will fail along with it.
fastfast?
fastfast is another reboot type we support.
Hi @paulmenzel,
[…]
Thank you. I know, but then I cannot comment directly. :(
I did not know. Thank you. (Maybe the commit message could have that added though.)
Can you elaborate on the “advanced reboot scenarios”? My gut says switches shouldn’t do a lot of “advanced” stuff, as this complicates things, is more error-prone, and often slower.
Currently we support 3 types of advanced reboot:
@vaibhavhd could you please review?
Minor comments.
The goal of SONiC fast-reboot is to be able to restart and upgrade SONiC software with a data plane disruption less than 30 seconds and control plane less than 90 seconds.
With current implementation there is no indication of the fast-reboot status, meaning we don't have a way to determine if the flow has finished or not.
Some feature flows in SONiC are delayed with a timer to keep the CPU dedicated to the fast-reboot init flow for best performance, like enablement of flex counters.
In order to have such indicator, re-use of the fastfast-reboot infrastructure can be used.
Minor: can we just call it warm-reboot instead of fastfast-reboot. Warm-reboot is generic.
warm-reboot is indeed generic, but the approach is different.
warm-reboot is divided into 2 implementations, the normal warm-reboot and fastfast-reboot.
Although the CLI command is the same "warm-reboot", under the hood the flow is different.
Calling it warm-reboot might be confusing, so this is why I mentioned it as fastfast-reboot - to be more accurate.
Do you agree?
In addition to the recover mechanism, the warmboot-finalizer can be enhanced to finalize fast-reboot as well and introduce a new flag indicating the process is done.
This new flag can be used later on for any functionality which we want to start only after init flow finished in case of fast-reboot.
This is to prevent interference in the fast-reboot reconciliation process and impair the performance, for example enablement of flex counters.
Minor: Link known issues and past patch-fixes that pressed for a need of this better design of fast-reboot indicator.
sonic-net/sonic-buildimage#7140
sonic-net/sonic-buildimage#7987
sonic-net/sonic-buildimage#8157
sonic-net/sonic-buildimage#7965
sonic-net/sonic-buildimage#8117
sonic-net/sonic-swss#1803
sonic-net/sonic-swss#1804
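The "new flag" idea above — gating feature enablement (e.g. flex counters) on an explicit completion flag instead of a fixed delay timer — can be sketched as follows. The table/field names and the dict standing in for STATE_DB are illustrative assumptions, not the schema proposed by the HLD:

```python
# In-memory dict standing in for STATE_DB; "FAST_RESTART_TABLE|system" and
# its "state" field are made-up names for illustration only.
state_db = {"FAST_RESTART_TABLE|system": {"state": "in-progress"}}

def fast_reboot_finished(db: dict) -> bool:
    """True once the finalizer has marked the fast-reboot flow as done."""
    entry = db.get("FAST_RESTART_TABLE|system", {})
    return entry.get("state") == "finished"

def maybe_enable_flex_counters(db: dict) -> bool:
    """Enable flex counters only after the init flow is over, instead of
    relying on a fixed delay timer. Returns whether counters were enabled."""
    if not fast_reboot_finished(db):
        return False  # stay disabled; do not compete with reconciliation
    # ... actual counter enablement would happen here ...
    return True

enabled_before = maybe_enable_flex_counters(state_db)
state_db["FAST_RESTART_TABLE|system"]["state"] = "finished"  # set by finalizer
enabled_after = maybe_enable_flex_counters(state_db)
```

The design point is that consumers poll (or subscribe to) a single authoritative flag written by the finalizer, rather than each feature carrying its own timer.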
- Recover the Kernel state after reset to the previous state prior the reboot.
- Sync the Kernel and ASIC with changes on the network which happen during fast-reboot.
- Control plane downtime will not exceed 90 seconds.
- Data plane downtime will not exceed 30 seconds.
All these requirements are already mostly being met.
Do you want to instead only emphasize the new requirements that are expected to be met by the new design? Like "Do not exceed CPU/mem consumption during the bootup path until the fast-reboot flow is finished."
I thought it would be better to gather all requirements, even the ones which are already satisfied by the previous implementation, since we are changing it to a new one.
I think it is important to mention all of them when the implementation is different.
What do you think?
'warmRestoreValidation' might fail the operation just like in fastfast-reboot case, if the way orchagent process an event from the DB is handled differently with the new software version the task will fail to execute and fast-reboot will fail along with it.
This is solvable by the db migrator.
This isn't true, DB_MIGRATOR cannot solve all the issues. We still continue to have failures when unexpected APPLY_VIEW operations are performed, or when SAI objects (or attr) change between old and new software versions. Or OID mismatches.
Will this design create potential for new issues in fast-reboot?
First, this is based on the fastfast-reboot infra and not the warm-reboot infra, so temp view is not used at all and INIT/APPLY view does basically nothing.
For DB migration and OID mismatches, this could be resolved by a modification to the way we backup the DB.
This should also address the problem when fast rebooting from an EOS to SONiC, as we discussed on mail.
From the mail thread:
If I understand correctly, we basically want to keep the json dump files we use in the current design instead of a Redis rdb backup file, and it will be backward compatible with this scenario as well.
So I am thinking maybe instead of taking a backup copy of the Redis DB, we keep the current approach where we push these dumps to the DB with the swssconfig tool via swssconfig.sh.
We can still use the fastfast-reboot reconciliation logic on each network application and orchagent since they will start after, and all data will be present in the DB at this point.
What do you think?
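The json-dump approach discussed in the mail thread above — serialize selected tables before the reboot so the new image can replay them — can be sketched as follows. The table names, the dict standing in for APP DB, and the dump path are all illustrative; real SONiC writes swssconfig-format dump files, and this toy serializer only mimics their list-of-entries shape:

```python
import json
import os
import tempfile

def dump_tables(db: dict, tables: list, path: str) -> None:
    """Write selected tables to a swssconfig-style JSON dump:
    a list of {key: field-value-map} entries."""
    entries = [{key: fvs} for key, fvs in db.items()
               if key.split(":", 1)[0] in tables]
    with open(path, "w") as f:
        json.dump(entries, f, indent=2)

# Dict standing in for APP DB; keys and fields are made up for illustration.
app_db = {
    "NEIGH_TABLE:Ethernet0:10.0.0.2": {"neigh": "aa:bb:cc:dd:ee:ff",
                                       "family": "IPv4"},
    "ROUTE_TABLE:0.0.0.0/0": {"nexthop": "10.0.0.2", "ifname": "Ethernet0"},
}
dump_path = os.path.join(tempfile.gettempdir(), "neigh_dump.json")
dump_tables(app_db, ["NEIGH_TABLE"], dump_path)  # dump only the neighbor table
```

Keeping per-table JSON dumps (rather than a whole-DB rdb snapshot) is what makes the replay tolerant to schema changes between images: the new image interprets the dump with its own logic instead of trusting old binary state.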
## 4.2 Syncd Point Of View - INIT/APPLY view framework
Syncd starts with the fast-reboot flag, trigger the ASIC reset when create_switch is requested from orchagent.
Is there anything changing in the new design wrt to syncd and orchagent flow?
The only change here is to disable temp view when running syncd since it is not needed.
The ASIC DB is empty at this point and we have only one view since we reset the ASIC.
## 4.5 Reboot finalizer

Today we have a tool used for warm-reboot to collect all reconciliation flags from the different network applications.
This tool can be enhanced to consider fast-reboot as well and introduce a new flag indicating the end of the process.
Do you need new flag? We already have a flag to detect fast-reboot. Example:
https://github.com/Azure/sonic-buildimage/blob/bc305283417a1c305bce4035547191e1d480c1d5/files/scripts/swss.sh#L65
This flag is a dummy flag and actually I am not sure it is even used anywhere.
It is a flag with an expired timer of 180 seconds; the one referred to in this HLD is in sync with all network applications going through a reconciliation process.
I am referring to the WARM_REBOOT_TABLE in State DB, which we can see by running: "show warm_restart state"
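The check the finalizer performs over that table — declare the flow finished only when every registered application has reconciled — can be sketched like this. The application names and state values below mirror what `show warm_restart state` displays, but treat the exact schema as an illustrative assumption:

```python
# Dict standing in for the warm-restart state table in STATE_DB;
# app names and state strings are illustrative.
warm_restart_table = {
    "orchagent":  {"state": "reconciled"},
    "neighsyncd": {"state": "reconciled"},
    "bgp":        {"state": "replayed"},   # still mid-flow
}

def all_reconciled(table: dict) -> bool:
    """Finalizer-style check: the reboot flow is done only when every
    registered application has reported the 'reconciled' state."""
    return all(entry.get("state") == "reconciled" for entry in table.values())

done_before = all_reconciled(warm_restart_table)
warm_restart_table["bgp"]["state"] = "reconciled"  # bgp finishes reconciling
done_after = all_reconciled(warm_restart_table)
```

Unlike the 180-second dummy timer flag, this completion signal is driven by the applications themselves, so it fires exactly when reconciliation actually ends.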
Address the scenario of upgrading to SONiC from a vendor NOS.
@vaibhavhd could you please help to review the recent changes?
- On this scenario all should work exactly the same as if the switch rebooted from SONiC to SONiC.

- Dump files of default gateway, neighbors and fdb tables are not provided to the new image as SONiC does prior the reboot.
- On this scenario fast-reboot will finish successfully, but with low performance since all neighbors and fdb entries will be created by the slow path.
The claim here is that without dump files the performance will be degraded. What metrics are you referring to?
Additionally, I see that without dump files, the downtime is >30s, and the test failed. I believe that is unexpected.
2022-05-18 15:56:33 : FAILED:dut:Total downtime period must be less then 0:00:30 seconds. It was 48.5044789314
What platform was tested here? Even for good-case scenario the downtime is 28s, which is concerningly close to 30s SLA.
@vaibhavhd what do you mean by metrics?
These are the current results we have with the current implementation; the test results were added to the HLD as you requested but are actually not related to this new feature.
The new design will support the flow you are requesting, but as for the performance of the new flow, if it is not sufficient it should be handled as a different feature. We can discuss it and add it as an FR.
If you want, we can have a discussion today.
By metrics I was meaning to ask what you meant by "low performance" in the line: "On this scenario fast-reboot will finish successfully, but with low performance".
Regarding the failure case (48s downtime), I did not notice that this was from current implementation. I assumed you tested it based on your local changes for new design.
# 4 Reconciliation at syncd

## 4.1 Orchagent Point Of View

If dump files provided by the previous image prior the reboot, all tables should be pushed to APP DB for reconciling orchagent.
If no dumps are provided, orchagent will reconcile with no information from prior the reboot.
On this case all ARP and FDB entries will be created by the slow path.
Please also describe what is meant by slow-path here for better clarity.
Done
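Pushing the dump tables back into APP DB for orchagent to reconcile (what swssconfig.sh does with the swssconfig tool) can be sketched as a simple replay of swssconfig-style dump entries. The dict standing in for APP DB and the table key are illustrative only:

```python
import json

def replay_dump(dump_json: str, app_db: dict) -> int:
    """Replay a swssconfig-style dump (a JSON list of {key: field-value-map}
    entries) into a dict standing in for APP DB; returns entries applied."""
    count = 0
    for entry in json.loads(dump_json):
        for key, fvs in entry.items():
            app_db.setdefault(key, {}).update(fvs)  # set/overwrite fields
            count += 1
    return count

# Illustrative dump with a single pre-reboot neighbor entry.
dump = json.dumps([
    {"NEIGH_TABLE:Ethernet0:10.0.0.2": {"neigh": "aa:bb:cc:dd:ee:ff"}},
])
app_db = {}
applied = replay_dump(dump, app_db)
```

When the dump is present, orchagent sees the preboot neighbors immediately in APP DB; without it, each entry must instead be relearned via the slow path (kernel ARP/FDB learning trickling back through the sync daemons).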
@shlomibitton can you please add the code PRs by following EVPN VxLAN update for platforms using P2MP tunnel based L2 forwarding by dgsudharsan · Pull Request #806 · Azure/SONiC (github.com) as an example
Code PRs need to be updated. @liat-grozovik
Related PRs:
master
202205