Update snmpd.conf.j2 prolong agentXTimeout to avoid timeout failure in high CPU utilization scenario #21316
Merged
Conversation
/azp run Azure.sonic-buildimage
Azure Pipelines successfully started running 1 pipeline(s).
This was referenced Jan 2, 2025
/run ms_conflict
/azpw ms_conflict
qiluo-msft approved these changes on Jan 8, 2025
This was referenced Jan 8, 2025
Cherry-pick PR to 202411: #21349
Cherry-pick PR to 202405: #21350
mssonicbld added a commit to mssonicbld/sonic-snmpagent that referenced this pull request on Jan 14, 2025
Fix sonic-net/sonic-buildimage#21314

The SNMP agent and MIB updaters are based on asyncio coroutines; the MIB updaters share the same asyncio event loop as the SNMP agent client. Hence, while the updaters are executing, the agent client cannot receive or respond to new requests. When CPU utilization is high (some stress tests drive the CPU to 100% utilization), the updates are slow, and snmpd requests time out because the agent is suspended during updating.

**- What I did**
Decrease the MIB update frequency when the update execution is slow.

Pros:
1. SNMP requests can succeed even at 100% CPU utilization.
2. snmpd requests seldom fail due to timeout; combined with sonic-net/sonic-buildimage#21316, we have a 4*5 = 20 s time window for the SNMP agent to wait for the MIB updates to finish and respond to the snmpd request.
3. Relieves CPU cost when the CPU is busy, which avoids making the CPU even more crowded.

Cons:
1. Tested on a pizzabox (4600c): the updaters are very fast, generally finishing within 0.001~0.02 s, so the change won't actually affect the frequency and interval there.
2. On Cisco chassis, the update of SNMP data could be delayed by 10~20 s (at most 60 s in extreme situations). Per my observation, most of the updaters finish within 0.5 s, but:
   a. ciscoSwitchQosMIB.QueueStatUpdater generally finishes in 0.5-2 s; expected to be delayed to 5-20 s.
   b. PhysicalTableMIBUpdater generally finishes in 0.5-1.5 s; expected to be delayed to 5-15 s.
   c. ciscoPfcExtMIB.PfcPrioUpdater generally finishes in 0.5-3 s; expected to be delayed to 5-30 s.

**- How I did it**
In get_next_update_interval, we compute the interval based on the current execution time. Roughly, we keep 'update interval' / 'update execution time' >= UPDATE_FREQUENCY_RATE (10). More specifically, if the execution time is 2.000001 s, we sleep 21 s before the next update round. The interval is capped at MAX_UPDATE_INTERVAL (60 s).

**- How to verify it**
Tested on a Cisco chassis: test_snmp_cpu.py triggers 100% CPU utilization and tests whether SNMP requests work well.

**- Description for the changelog**
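The interval rule described above can be sketched as follows. This is a hypothetical reconstruction from the PR description, not the actual sonic-snmpagent code: the constant names follow the text, but `DEFAULT_UPDATE_INTERVAL` and the exact rounding are assumptions (ceiling matches the "2.000001 s → sleep 21 s" example).

```python
import math

# Assumed constants, taken from the PR description.
UPDATE_FREQUENCY_RATE = 10   # keep interval / execution time >= 10
MAX_UPDATE_INTERVAL = 60     # never sleep longer than 60 s
DEFAULT_UPDATE_INTERVAL = 5  # assumed baseline polling interval

def get_next_update_interval(execution_time: float) -> int:
    """Return seconds to sleep before the next MIB update round."""
    # Scale the last execution time so slow updaters run less often,
    # then clamp between the baseline and the hard cap.
    scaled = math.ceil(execution_time * UPDATE_FREQUENCY_RATE)
    return min(max(scaled, DEFAULT_UPDATE_INTERVAL), MAX_UPDATE_INTERVAL)
```

Under these assumptions a fast updater (0.02 s) keeps its baseline 5 s interval, a 2.000001 s run sleeps 21 s as in the example, and anything over 6 s hits the 60 s cap.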
mssonicbld added a commit to sonic-net/sonic-snmpagent that referenced this pull request on Jan 14, 2025
VladimirKuk pushed a commit to Marvell-switching/sonic-buildimage that referenced this pull request on Jan 21, 2025
…n high CPU utilization scenario (sonic-net#21316)
Why I did it
Fix #21314
Increase the timeout of the requests between snmpd and the SNMP AgentX subagent.
In SONiC's SNMP AgentX subagent, the MIB updaters and the AgentX client share the same asyncio event loop.
While the MIB updaters are updating the SNMP values, the AgentX client cannot respond to snmpd requests.
The snmpd defaults are a 1 s timeout with 5 retries.
When the CPU is busy, the MIB updaters are slow, and a 1 s timeout is not enough even with 5 retries.
Hence we update to a 5 s timeout with 4 retries, a 4 * 5 = 20 s time window, which ensures SNMP requests can be handled even at 100% CPU utilization.
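The starvation mechanism described above is easy to reproduce in isolation. This is a minimal sketch, not the sonic-snmpagent code: a coroutine doing synchronous (non-awaited) work holds the shared event loop, so a sibling coroutine standing in for the AgentX client cannot run until it finishes.

```python
import asyncio
import time

async def mib_updater():
    # CPU-bound, synchronous work: nothing is awaited here, so the
    # event loop cannot switch to any other coroutine until it returns.
    time.sleep(0.2)

async def agentx_client(timestamps, start):
    # Stands in for answering an snmpd request; records when it finally ran.
    timestamps.append(time.monotonic() - start)

async def main():
    timestamps = []
    start = time.monotonic()
    # The updater task is scheduled first, so the client's "response"
    # is delayed by the updater's full execution time.
    await asyncio.gather(mib_updater(), agentx_client(timestamps, start))
    return timestamps[0]

delay = asyncio.run(main())
# delay is roughly the updater's 0.2 s execution time; scale that up
# under 100% CPU and the 1 s snmpd timeout is easily exceeded.
```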
Work item tracking
Microsoft ADO 30112399
How I did it
Update the default values (https://linux.die.net/man/5/snmpd.conf):
agentXTimeout: 1 (default) -> 5
agentXRetries: 5 (default) -> 4
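In snmpd.conf terms, the change amounts to the two directives below. This is an illustrative excerpt under the values stated above, not the full SONiC snmpd.conf.j2 template:

```
# Widen the master<->AgentX subagent request window
agentXTimeout 5   # seconds per request attempt (default 1)
agentXRetries 4   # retry attempts (default 5); total window 5 s * 4 = 20 s
```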
How to verify it
Tested on a Cisco chassis: test_snmp_cpu.py triggers 100% CPU utilization and tests whether SNMP requests work well.
Which release branch to backport (provide reason below if selected)
Tested branch (Please provide the tested image version)
Description for the changelog
Link to config_db schema for YANG module changes
A picture of a cute animal (not mandatory but encouraged)