
[dualtor] Restart linkmgrd before toggle #16878

Merged
merged 1 commit into sonic-net:master on Feb 11, 2025

Conversation


@lolyu (Contributor) commented Feb 10, 2025

Description of PR

Summary:
Fixes # (issue)

Type of change

  • Bug fix
  • Testbed and Framework(new/improvement)
  • New Test case
    • Skipped for non-supported platforms
  • Test case improvement

Back port request

  • 202012
  • 202205
  • 202305
  • 202311
  • 202405
  • 202411

Approach

What is the motivation for this PR?

Dualtor I/O tests are flaky because the DUT gets stuck in heartbeat suspension and cannot react to any failure scenario. This causes nightly failures like the following:

  1. The heartbeat suspension leaves the mux port in an unhealthy state:
E       Failed: Database states don't match expected state standby,incorrect STATE_DB values {
E           "MUX_LINKMGR_TABLE|Ethernet48": {
E               "state": "unhealthy"
E           }
E       }
  2. If a failure is triggered during the heartbeat suspension, the DUT fails to toggle to the correct state (see the query sketch after these examples):
E       Failed: Database states don't match expected state active,incorrect APP_DB values {
E           "HW_MUX_CABLE_TABLE:Ethernet60": {
E               "state": "standby"
E           },
E           "MUX_CABLE_TABLE:Ethernet60": {
E               "state": "standby"
E           }
E       }
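For reference, these states live in the DUT redis databases and can be inspected directly. Below is a minimal sketch (not part of this PR) of how one might query them with sonic-db-cli through the sonic-mgmt duthost object; the helper name and example port are hypothetical, and APPL_DB is the sonic-db-cli name for what the assertion above calls APP_DB.

```python
# Hypothetical helper (not from this PR): read the mux-port state fields that the
# failing assertions above compare against. `duthost` is assumed to be the usual
# sonic-mgmt DUT host object exposing `.shell()`.
def get_mux_port_states(duthost, port):
    """Return the STATE_DB health state and APP_DB mux states for one mux port."""
    health = duthost.shell(
        "sonic-db-cli STATE_DB HGET 'MUX_LINKMGR_TABLE|{}' state".format(port)
    )["stdout"].strip()
    hw_mux_state = duthost.shell(
        "sonic-db-cli APPL_DB HGET 'HW_MUX_CABLE_TABLE:{}' state".format(port)
    )["stdout"].strip()
    mux_state = duthost.shell(
        "sonic-db-cli APPL_DB HGET 'MUX_CABLE_TABLE:{}' state".format(port)
    )["stdout"].strip()
    return {"health": health, "hw_mux_state": hw_mux_state, "mux_state": mux_state}


# Example: get_mux_port_states(duthost, "Ethernet48")
```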
Why can't the DUT react to failure scenarios when it is stuck in heartbeat suspension?

When icmp_responder is not running, the active side is stuck in the following loop waiting for the peer to take over, and the heartbeat suspension backs off with a maximum timeout of 128 * 100ms = 51s. If icmp_responder is then started and the test case continues with a failure scenario like link drop during this period, the DUT cannot detect the link drop because the active side is still in heartbeat suspension.

# (unknown, active, up) ----------------------------> (unknown, wait, up)
#         ^              suspend timeout, probe mux           |          
#         |                                                   |          
#         |                                                   |          
#         |                                                   |          
#         +---------------------------------------------------+          
#                mux active, suspend heartbeat with backoff
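To make the backoff concrete, here is a small illustrative sketch in plain Python (not linkmgrd code); the interval and maximum factor are placeholders echoing the figures quoted above, not values read from linkmgrd configuration:

```python
# Illustrative only: how an exponentially backed-off suspension window grows while
# the peer never takes over. The constants are placeholders based on the numbers
# quoted in this PR description, not actual linkmgrd settings.
HEARTBEAT_INTERVAL_MS = 100
MAX_BACKOFF_FACTOR = 128


def suspension_windows():
    """Yield successive heartbeat-suspension windows in milliseconds."""
    factor = 1
    while True:
        # Cycle: suspend heartbeats, wait out the window, probe the mux, find it
        # still active, then suspend again with a doubled (capped) window.
        yield factor * HEARTBEAT_INTERVAL_MS
        factor = min(factor * 2, MAX_BACKOFF_FACTOR)


if __name__ == "__main__":
    gen = suspension_windows()
    print([next(gen) for _ in range(10)])  # 100, 200, 400, ... up to the cap
```

Any link drop injected by the test during one of these windows goes unnoticed until the window expires, which is exactly the flakiness described above.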
Why do KVM PR tests hit this issue more often?

This issue is easier to reproduce when the active side reaches the maximum suspension timeout (51s). That happens easily in the KVM PR tests because the dualtor I/O test cases run right after the pretests (when no icmp_responder is running), so the DUT backs the suspension timeout off to the maximum.
On physical testbeds, the dualtor I/O test cases are interleaved with other test cases that might start icmp_responder.

How did you do it?

Let's restart linkmgrd in the toggle fixture so all mux ports will come out of the heartbeat suspension state.
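As a rough sketch of the idea (the fixture name and exact command are assumptions, not the literal change merged here): on a dualtor DUT, linkmgrd runs under supervisord inside the mux container, so the toggle fixture can restart it on both ToRs before toggling.

```python
import pytest


@pytest.fixture
def restart_linkmgrd_before_toggle(duthosts):
    """Hypothetical sketch of the approach, not the merged code.

    Assumes linkmgrd runs under supervisord inside the 'mux' container (the usual
    dualtor layout) and that `duthosts` is the standard sonic-mgmt fixture listing
    both ToRs.
    """
    for duthost in duthosts:
        # Restarting linkmgrd resets the heartbeat-suspension backoff so that
        # subsequent toggles and injected failures are detected promptly.
        duthost.shell("docker exec mux supervisorctl restart linkmgrd")
    yield
```

With linkmgrd freshly restarted, the link prober starts from its initial, non-suspended state, so the toggle and any failure scenario that follows get timely heartbeat reactions.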

How did you verify/test it?

dualtor_io/test_heartbeat_failure.py::test_active_tor_heartbeat_failure_upstream[active-standby] PASSED                                                                                                                                                                [100%]

============================================================================================================================== warnings summary ==============================================================================================================================

Any platform specific information?

Supported testbed topology if it's a new test case?

Documentation

@mssonicbld (Collaborator)

/azp run


Azure Pipelines successfully started running 1 pipeline(s).

@wangxin merged commit 25e9c4b into sonic-net:master Feb 11, 2025
19 checks passed
mssonicbld pushed a commit to mssonicbld/sonic-mgmt that referenced this pull request Feb 28, 2025
@mssonicbld (Collaborator)

Cherry-pick PR to 202411: #17241

mssonicbld pushed a commit that referenced this pull request Mar 3, 2025