[dualtor] Restart linkmgrd before toggle #16878
Merged
+7 −0
Conversation
Signed-off-by: Longxiang Lyu <[email protected]>
/azp run

Azure Pipelines successfully started running 1 pipeline(s).
yxieca approved these changes on Feb 10, 2025.
mssonicbld pushed a commit to mssonicbld/sonic-mgmt that referenced this pull request on Feb 28, 2025:
What is the motivation for this PR?

Dualtor I/O tests are flaky because the DUT is stuck in heartbeat suspension and cannot react to any failure scenario; this causes nightly failures like the following:

- the heartbeat suspension leaves the mux port in the unhealthy state:

```
E   Failed: Database states don't match expected state standby,incorrect STATE_DB values {
E       "MUX_LINKMGR_TABLE|Ethernet48": {
E           "state": "unhealthy"
E       }
E   }
```

- if a failure is triggered during the heartbeat suspension, the DUT fails to toggle to the correct state:

```
E   Failed: Database states don't match expected state active,incorrect APP_DB values {
E       "HW_MUX_CABLE_TABLE:Ethernet60": {
E           "state": "standby"
E       },
E       "MUX_CABLE_TABLE:Ethernet60": {
E           "state": "standby"
E       }
E   }
```

why the DUT cannot react to failure scenarios when stuck in heartbeat suspension?

When icmp_responder is not running, the active side is stuck in the following loop waiting for the peer to take over, and the heartbeat suspension will back off with a maximum timeout of 128 * 100ms = 51s. If the icmp_responder is then started and the testcase continues with a failure scenario like a link drop during this period, the DUT cannot detect the link drop because the active side is still in heartbeat suspension.

```
# (unknown, active, up) ----------------------------> (unknown, wait, up)
#          ^           suspend timeout, probe mux            |
#          |                                                 |
#          |                                                 |
#          |                                                 |
#          +-------------------------------------------------+
#              mux active, suspend heartbeat with backoff
```

why kvm PR tests hit this issue more often?

This issue is more easily reproduced when the active side reaches the maximum suspension timeout (51s). That is easily achieved in the kvm PR tests because the dualtor I/O testcases are executed right after the pretests (no icmp_responder is running), so the DUT can back the suspension timeout off to the maximum. On a physical testbed, the dualtor I/O testcases are interleaved with other testcases that might start the icmp_responder.

How did you do it?

Let's restart linkmgrd in the toggle fixture so all mux ports will come out of the heartbeat suspension state.

How did you verify/test it?

dualtor_io/test_heartbeat_failure.py::test_active_tor_heartbeat_failure_upstream[active-standby] PASSED [100%]

Signed-off-by: Longxiang Lyu <[email protected]>
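For reference, the failing assertions above compare mux state fields in STATE_DB and APP_DB. Below is a minimal sketch of how a test could read those fields, assuming a sonic-mgmt `duthost` object and the `sonic-db-cli` utility; the `get_mux_states` helper is illustrative and not part of this PR (note `sonic-db-cli` names the database APPL_DB, which the test output labels APP_DB):

```python
# Illustrative helper (not from this PR): read the mux health and cable
# state that the failing assertions compare against. Assumes `duthost` is
# a sonic-mgmt DUT host object whose shell() returns a dict with "stdout".
def get_mux_states(duthost, port):
    # MUX_LINKMGR_TABLE lives in STATE_DB, keyed as "MUX_LINKMGR_TABLE|<port>"
    health = duthost.shell(
        "sonic-db-cli STATE_DB hget 'MUX_LINKMGR_TABLE|{}' state".format(port)
    )["stdout"].strip()
    # MUX_CABLE_TABLE lives in APPL_DB (shown as APP_DB in the test output),
    # keyed as "MUX_CABLE_TABLE:<port>"
    cable = duthost.shell(
        "sonic-db-cli APPL_DB hget 'MUX_CABLE_TABLE:{}' state".format(port)
    )["stdout"].strip()
    return health, cable

# During heartbeat suspension this could return ("unhealthy", "standby")
# even though the test expects the port to be healthy and active.
```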
Cherry-pick PR to 202411: #17241
mssonicbld pushed a commit that referenced this pull request on Mar 3, 2025.
Description of PR
Summary:
Fixes # (issue)
Type of change
Back port request
Approach
What is the motivation for this PR?
Dualtor I/O tests are flaky because the DUT gets stuck in heartbeat suspension and cannot react to any failure scenario. This causes nightly failures of two kinds: the heartbeat suspension leaves the mux port in the unhealthy state, and a failure triggered during the suspension leaves the DUT unable to toggle to the correct state.

why the DUT cannot react to failure scenarios when stuck in heartbeat suspension?
When icmp_responder is not running, the active side is stuck in a loop waiting for the peer to take over, and the heartbeat suspension will back off with a maximum timeout of 128 * 100ms = 51s. If the icmp_responder is then started and the testcase continues with a failure scenario like a link drop during this period, the DUT cannot detect the link drop because the active side is still in heartbeat suspension.

why kvm PR tests hit this issue more often?
This issue is more easily reproduced when the active side reaches the maximum suspension timeout (51s). That is easily achieved in the kvm PR tests because the dualtor I/O testcases are executed right after the pretests (no icmp_responder is running), so the DUT can back the suspension timeout off to the maximum. On a physical testbed, the dualtor I/O testcases are interleaved with other testcases that might start the icmp_responder.

How did you do it?
Let's restart linkmgrd in the toggle fixture so all mux ports will come out of the heartbeat suspension state, as in the sketch below.
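A minimal sketch of that idea, assuming the sonic-mgmt `upper_tor_host`/`lower_tor_host` fixtures and that linkmgrd runs under supervisord inside the `mux` container; the fixture name and exact restart command are illustrative, not the code merged in this PR:

```python
import pytest

@pytest.fixture
def restart_linkmgrd_before_toggle(upper_tor_host, lower_tor_host):
    """Hypothetical fixture: restart linkmgrd on both ToRs so every mux
    port leaves heartbeat suspension (and its backoff resets) before the
    toggle runs."""
    for duthost in (upper_tor_host, lower_tor_host):
        # Assumption: linkmgrd runs under supervisord in the "mux" container.
        duthost.shell("docker exec mux supervisorctl restart linkmgrd")
    yield
```

Restarting the daemon is a simple way to clear the accumulated backoff state on every mux port at once, rather than waiting out the suspension timer per port.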
How did you verify/test it?

dualtor_io/test_heartbeat_failure.py::test_active_tor_heartbeat_failure_upstream[active-standby] PASSED [100%]
Any platform specific information?
Supported testbed topology if it's a new test case?
Documentation