[Mellanox] Support one ingress pool mode #4686
Conversation
retest vsimage please
5570496
The ingress shared buffer size for T1 still seems too small to me.
Force-pushed from 99e7f14 to 1010d2c
According to the SAI spec, the buffer pool size includes the reserved buffers for all PGs/queues. That means we would specify the same pool size in the T1 and T0 templates and internally subtract the reserved space to get the actual shared pool size to allocate.
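The rule described above can be sketched in a few lines. This is an illustrative sketch only, not code from the PR; the function name and the example numbers are invented, and the real reserved sizes come from the per-PG/per-queue buffer profiles in the device templates.

```python
# Hedged sketch of the SAI pool-sizing semantics described above: the
# configured pool size includes the reserved per-PG/per-queue buffers,
# so the actual shared space is the configured size minus the sum of
# all reservations. All names and numbers here are illustrative.

def shared_pool_size(configured_pool_size, reserved_sizes):
    """Subtract all reserved PG/queue buffers from the configured pool size."""
    shared = configured_pool_size - sum(reserved_sizes)
    if shared < 0:
        raise ValueError("reserved buffers exceed the configured pool size")
    return shared

# Example: a 14 MB pool with 32 ports, each reserving 2 * 9216 B of headroom.
reserved = [2 * 9216] * 32
print(shared_pool_size(14 * 1024 * 1024, reserved))
```

With this convention the T0 and T1 templates can share one configured pool size while the allocated shared space still reflects each topology's reservations.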
I see 'ingress_lossy_profile' being defined in the templates but not used anywhere. Is that intentional?
Force-pushed from 79f3287 to 323ef24
Do you have buffer templates for the SN2700 C32 SKU?
Why do different SKUs get different Xoff sizes at the same link speed and cable length?
If they are based on different ASICs, they can have different Xoff sizes because of different cell sizes.
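One way the cell size enters the picture: buffer allocations are made in whole ASIC cells, so a raw Xoff byte threshold gets rounded up to a multiple of the cell size, and different cell sizes therefore yield different effective Xoff values. The sketch below is illustrative only; the cell sizes shown are made-up placeholders, not the actual values for any Mellanox ASIC.

```python
import math

# Illustrative only: round a raw Xoff byte count up to a whole number
# of ASIC cells. ASICs with different cell sizes will land on different
# effective Xoff values for the same raw threshold, which is one reason
# the same link speed/cable length can produce different Xoff sizes.

def xoff_in_cells(raw_xoff_bytes, cell_size):
    """Round the Xoff threshold up to a multiple of the ASIC cell size."""
    return math.ceil(raw_xoff_bytes / cell_size) * cell_size

# Hypothetical cell sizes; consult the ASIC documentation for real ones.
print(xoff_in_cells(20000, 96))   # -> 20064
print(xoff_in_cells(20000, 144))  # -> 20016
```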
Yes. All SKUs based on the SN2700 share the same buffer templates.
Got it! What are the cell sizes of the 3800 and 4600?
Hi @liat-grozovik @lguohan,
…lation

- Calculate pool size in T1 as 24 * downlink ports + 8 * uplink ports
- Take both port and peer MTU into account when calculating headroom
- Worst-case factor is decreased to 50%
- Mellanox-SN2700-C28D8 t0: assume 48 * 50G/5m + 8 * 100G/40m ports
- Mellanox-SN2700 (C32)
  - t0: 16 * 100G/5m + 16 * 100G/40m
  - t1: 16 * 100G/40m + 16 * 100G/300m

Signed-off-by: Stephen Sun <stephens@mellanox.com>
Force-pushed from e871991 to 4c0e8ae
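The port mix in the commit message (24 downlink + 8 uplink ports for T1) can be sketched as a simple weighted sum. This is a hedged illustration, not the template's actual logic: the function name and the per-port byte values are invented placeholders, and the real numbers come from the Mellanox buffer templates after headroom and worst-case adjustments.

```python
# Hedged illustration of the T1 port mix from the commit message: the
# pool size is calculated assuming 24 downlink ports and 8 uplink ports.
# The per-port allocations below are made-up placeholders.

def t1_pool_contribution(per_downlink_bytes, per_uplink_bytes,
                         n_downlink=24, n_uplink=8):
    """Total ingress pool contribution for a T1 downlink/uplink port mix."""
    return n_downlink * per_downlink_bytes + n_uplink * per_uplink_bytes

# Example with invented per-port allocations.
print(t1_pool_contribution(per_downlink_bytes=200_000,
                           per_uplink_bytes=400_000))  # -> 8000000
```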
Done. Please review. @baiwei0427 @neethajohn
@baiwei0427 are we good to go with these values?
I will do the calculations tomorrow and get back to you as soon as possible.
@baiwei0427 could you also provide the cable length information for calculating the pool size for the 3800 as well?
Thank you!
Server to T0: 5m
How many ports are connected to servers, ToRs, and leaf/spine routers, respectively, in each SKU?
@liat-grozovik @stephenxs I did calculations for D48C8 T0, SN2700 T0, and SN2700 T1 and got similar results.
Note: leaving aside the new requirements for SN3800, this PR updates the SN2700 SKUs.
The cherry-pick has a conflict. Created a PR for 201911.
…lation (sonic-net#4686)

- Calculate pool size in T1 as 24 * downlink ports + 8 * uplink ports
- Take both port and peer MTU into account when calculating headroom
- Worst-case factor is decreased to 50%
- Mellanox-SN2700-C28D8 t0: assume 48 * 50G/5m + 8 * 100G/40m ports
- Mellanox-SN2700 (C32)
  - t0: 16 * 100G/5m + 16 * 100G/40m
  - t1: 16 * 100G/40m + 16 * 100G/300m

Signed-off-by: Stephen Sun <stephens@mellanox.com>
Co-authored-by: Stephen Sun <stephens@mellanox.com>
Support one ingress pool mode.
Signed-off-by: Stephen Sun <stephens@mellanox.com>
- Why I did it
Support one ingress pool mode
- How I did it
- How to verify it
- Description for the changelog
- A picture of a cute animal (not mandatory but encouraged)