lnwallet: update RBF state machine w/ latest spec guidelines #9568
Conversation
Force-pushed from 18680c1 to 911bf10
thanks for breaking this up, def made the review process easier🙏
Mostly questions. I also found that the logging needs to be updated after running the unit tests; there's a trivial fix here: #9571
Reviewed 2 of 2 files at r1, 1 of 1 files at r2, 4 of 4 files at r3, 3 of 3 files at r4, 2 of 2 files at r5, 3 of 3 files at r6, 2 of 2 files at r7, all commit messages.
Reviewable status: all files reviewed, 10 unresolved discussions (waiting on @Crypt-iQ)
lnwallet/chancloser/rbf_coop_test.go
line 1918 at r7 (raw file):
startingState := &ClosingNegotiation{
	PeerState: lntypes.Dual[AsymmetricPeerState]{
		Local: &CloseErr{
looks like we need a `String` for `CloseErr`:

2025-03-03 09:54:19.878 [INF] PFSM: FSM(rbf_chan_closer(9626774478b3b09627527a36573dbedc50f8743ffbc29db279cfc44765d3f267:1794882728)): State transition from_state="ClosingNegotiation(local=%!v(PANIC=Error method: runtime error: invalid memory address or nil pointer dereference), remote=<nil>)" to_state="ClosingNegotiation(local=LocalCloseStart, remote=<nil>)"
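For reference, a minimal sketch of the missing Stringer, assuming `CloseErr` follows the same pattern as the package's other state types (the receiver is illustrative and not taken from this diff):

```go
// String returns the name of the CloseErr state. Implementing fmt.Stringer
// keeps the PFSM transition log readable and avoids the nil pointer
// dereference visible in the log line above.
func (c *CloseErr) String() string {
	return "CloseErr"
}
```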
lnwallet/chancloser/rbf_coop_test.go
line 1956 at r7 (raw file):
// From the error state, we should be able to handle the remote party
// kicking off a new iteration for a fee bump.
t.Run("recv ofer restart", func(t *testing.T) {
Suggestion:
recv_offer_restart
lnwallet/chancloser/rbf_coop_transitions.go
line 584 at r2 (raw file):
// the co-op close process.
switch msg := event.(type) {
// Ignore any potential duplicate channel flushed events.
Q: does it mean there are some inconsistent states that caused the channel to be flushed multiple times?
lnwallet/chancloser/rbf_coop_test.go
line 1084 at r6 (raw file):
// cached.
currentState := assertStateT[*ChannelFlushing](closeHarness)
require.NotNil(t, currentState.EarlyRemoteOffer)
We also need to test that `EarlyRemoteOffer` is processed by sending a `ChannelFlushed` event?
lnwallet/chancloser/rbf_coop_states.go
line 570 at r4 (raw file):
// ErrStateCantPayForFee is sent when the local party attempts a fee update
// that they can't actually party for.
type ErrStateCantPayForFee struct {
nit: could also add a `String` like other states? Otherwise we get this long state:

2025-03-03 08:56:26.408 [INF] PFSM: FSM(rbf_chan_closer(899f309b3b672cc2423754c89125c3e3e2f8f9f6075254b0390f87db3db61548:1129365703)): State transition from_state="ClosingNegotiation(local=LocalCloseStart, remote=<nil>)" to_state="ClosingNegotiation(local=cannot pay for fee of 1 BTC, only have 0.00040000 BTC local balance, remote=<nil>)"
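A possible shape for this nit, assuming the long message in the log comes from the state's error formatting and a short fixed name is enough for transition logs:

```go
// String returns a compact name for the state so FSM transition logs stay
// short; the detailed "cannot pay for fee" message stays available on the
// underlying error.
func (e *ErrStateCantPayForFee) String() string {
	return "ErrStateCantPayForFee"
}
```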
lnwallet/chancloser/rbf_coop_states.go
line 910 at r4 (raw file):
	return true
}
Q: technically when we end up here it's an error state, right? Since a remote party cannot have the event `SendOfferEvent`, or a local party receive an `OfferReceivedEvent`.
lnwallet/chancloser/rbf_coop_test.go
line 1610 at r4 (raw file):
})

t.Run("recv_offer_wrong_local_script", func(t *testing.T) {
nit: would be nice to have some docs describing the test like how other test cases do
lnwallet/chancloser/rbf_coop_test.go
line 1636 at r4 (raw file):
closeHarness.chanCloser.SendEvent(ctx, feeOffer)

// We shouldn't have transitioned to a new state.
Q: I guess the reestablish logic is implemented in a followup PR?
lnwallet/channel.go
line 9277 at r5 (raw file):
haveLocalOutput := ourBalance >= localDust
if haveLocalOutput {
	// If a party's script is an OP_RETURN, then we set their
Suggestion:
our
lnwallet/channel.go
line 9282 at r5 (raw file):
input.ScriptIsOpReturn(ourDeliveryScript) {
	ourBalance = 0
Won't this contradict the above `haveLocalOutput` logic since we know it's not a dust output?

It's also weird that `CreateCloseProposal` returns `ourBalance`, while we set it to zero here. It looks like we should change `CoopCloseBalance` to return the updated balances, maybe a future PR.
one comment
// EarlyRemoteOffer is the offer we received from the remote party
// before we received their shutdown message. We'll stash it to process
// later.
EarlyRemoteOffer fn.Option[OfferReceivedEvent]
Isn't this a violation of the spec? Why should we be permissive in accepting something that we should explicitly reject?
Rationale comes from the Robustness Principle: https://en.wikipedia.org/wiki/Robustness_principle. I can remove it ofc, but handling the early send case (they send their first offer before they recv our shutdown reply) can help us save an otherwise aborted flow.
Eclair had this behavior early on when we did interop. I think we need more interop hours/flows to conclude if it's safe to remove or not.
As mentioned, this was a flake in the itests (not added in this PR) that was due to some async send logic (when a send is conditional on some predicate) in the state machine executor. There are a few other options there, including:
- Modify the state machine executor to do a sync test of the predicate, doing a blocking send if it's already true.
- Modify this state machine further to add an intermediate `WaitingForSend` state. Once we issue the send event, we shift into this state, then process another event once the send has finalized.
I think if the messages were single-threaded, we wouldn't need to worry about this early stashing logic.
What is the lift like here? I really think it's a good idea to have it be single-threaded as I personally find it easier to reason about
On the transport layer, message ordering is always guaranteed, right?
Yup
What is the lift like here? I really think it's a good idea to have it be single-threaded as I personally find it easier to reason about
They are single threaded. The exception is when you don't want a message to go out unconditionally; instead, you want to wait for a condition to be upheld.
I've added a commit to do the first option.
I think it makes sense to still leave this in place, we've seen it pop up once during interop, and we still have 2 implementations to go before we finalize interop with all the major implementations.
Ah, ok I misunderstood how this worked.
This'll be useful to communicate what the new fee rate is to an RPC caller.
If we go to close while the channel is already flushed, we might get an extra event, so we can safely ignore it and do a self state transition.
Both these messages now carry the address of both parties, so you can update an address without needing to send shutdown again.
In this commit, we implement the latest version of the RBF loop as described in the spec. We remove the self loop back based on sending or receiving shutdown. Instead, from the ClosePending state, we can trigger a new loop by sending SendOfferEvent (we bump), or OfferReceivedEvent (they bump).

We also update the rbf state machine w/ the new close addr logic. This logic ensures that the remote party always sends our current address, and that if they send a new address, we'll update our view of it, and counter sign the correct transaction.

We also add a CloseErr state. With this new state, we can ensure that we're able to properly report errors back to the RPC client, and also optionally force a reconnection or send a warning to the remote party.
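As a rough illustration of the new loop described in this commit message (all names below are simplified stand-ins rather than the actual chancloser API), `ClosePending` can now route either bump event into a fresh negotiation round instead of requiring another shutdown exchange:

```go
package main

import (
	"errors"
	"fmt"
)

// Simplified stand-ins for the chancloser events; the real types live in
// lnwallet/chancloser and carry more data.
type (
	SendOfferEvent     struct{} // we want to bump the fee
	OfferReceivedEvent struct{} // the remote party bumped the fee
)

// nextFromClosePending sketches how the ClosePending state can kick off a
// new RBF iteration for either side, instead of looping back via a fresh
// shutdown exchange.
func nextFromClosePending(event any) (string, error) {
	switch event.(type) {
	case *SendOfferEvent:
		return "LocalCloseStart", nil
	case *OfferReceivedEvent:
		return "RemoteCloseStart", nil
	default:
		return "", errors.New("invalid state transition")
	}
}

func main() {
	next, _ := nextFromClosePending(&OfferReceivedEvent{})
	fmt.Println("next state:", next) // prints "RemoteCloseStart"
}
```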
In this commit, we implement a special case for OP_RETURN scripts outlined in the spec. If a party decides that its output will be too small even after the dust check, then they can opt to set it to zero by sending an `OP_RETURN` as their script.
In this commit, we update the RBF state machine to handle early offer cases. This can happen if after we send out shutdown (to kick things off), the remote party sends their offer early. This can also happen if their outgoing shutdown (to ACK ours) was delayed for w/e reason, and we get their offer first. The alternative was to modify the state machine itself, but we feel that handling this early case is better in line with the Robustness principle.
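A hedged, self-contained sketch of the early-offer handling this commit describes; the real code stashes the offer in the `EarlyRemoteOffer fn.Option[OfferReceivedEvent]` field quoted earlier in this thread, while the types below are simplified stand-ins:

```go
package main

import "fmt"

// Simplified stand-ins for the chancloser events.
type OfferReceivedEvent struct{ FeeSats int64 }
type ChannelFlushed struct{}

// ChannelFlushing stashes a remote offer that arrives before the flush
// completes, then replays it once the ChannelFlushed event fires.
type ChannelFlushing struct {
	earlyRemoteOffer *OfferReceivedEvent // fn.Option in the real code
}

func (c *ChannelFlushing) handle(event any) {
	switch e := event.(type) {
	case *OfferReceivedEvent:
		// The offer arrived early: stash it rather than aborting the
		// flow, per the Robustness Principle rationale above.
		c.earlyRemoteOffer = e

	case *ChannelFlushed:
		if c.earlyRemoteOffer != nil {
			// Replay the stashed offer as the first event of the
			// negotiation phase.
			fmt.Println("replaying early offer:", c.earlyRemoteOffer.FeeSats)
		}
	}
}

func main() {
	s := &ChannelFlushing{}
	s.handle(&OfferReceivedEvent{FeeSats: 1_000})
	s.handle(&ChannelFlushed{})
}
```

Stashing a single offer is sufficient here because, as noted above, the transport layer guarantees message ordering.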
Reviewable status: all files reviewed, 9 unresolved discussions (waiting on @Crypt-iQ and @yyforyongyu)
lnwallet/channel.go
line 9282 at r5 (raw file):
Previously, yyforyongyu (Yong) wrote…
Won't this contradict the above `haveLocalOutput` logic since we know it's not a dust output? It's also weird that `CreateCloseProposal` returns `ourBalance`, while we set it to zero here. It looks like we should change `CoopCloseBalance` to return the updated balances, maybe a future PR.
So the rationale of this `OP_RETURN` behavior is for when either party has a balance that's above "absolute dust", but considers their output uneconomical at the current fee rate. They can send an `OP_RETURN` as a script to burn their entire output to fees.

There was a bit of back n forth in the spec re if this is useful/realistic, but for now it stands.
`haveLocalOutput` is only used once in this function. If their output was going to be dust already, then the `OP_RETURN` cut out doesn't apply.
IIUC, the balance returned from `CoopCloseBalance` reflects the final balance after fees, which is still technically accurate, but either party may wish to burn their entire balance altogether. We still need that intermediate value here as we want to base the check on the balance after fees.
lnwallet/chancloser/rbf_coop_states.go
line 910 at r4 (raw file):
Previously, yyforyongyu (Yong) wrote…
Q: technically when we end up here it's an error state, right? Since a remote party cannot have the event `SendOfferEvent`, or a local party receive an `OfferReceivedEvent`.
Correct, I'll add an updated state machine diagram here in a new push.
In this case the composite state adds a layer of indirection. So we'll actually error out in `ClosingNegotiation.ProcessEvent`: if `shouldRouteTo` for both local and remote is false, then we fall through to the bottom and return `ErrInvalidStateTransition`.
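Schematically, that routing looks roughly like the following sketch (the interface and function are illustrative stand-ins, not the actual `ClosingNegotiation` implementation):

```go
package sketch

import "errors"

// errInvalidStateTransition stands in for the package-level error the real
// state machine returns when no sub-state claims an event.
var errInvalidStateTransition = errors.New("invalid state transition")

// peerState is an illustrative stand-in for the per-party sub-states held
// by the composite ClosingNegotiation state.
type peerState interface {
	shouldRouteTo(event any) bool
	processEvent(event any) (string, error)
}

// routeEvent mirrors the fall-through described above: try the local
// sub-state, then the remote one, and report an invalid transition if
// neither claims the event.
func routeEvent(local, remote peerState, event any) (string, error) {
	for _, sub := range []peerState{local, remote} {
		if sub.shouldRouteTo(event) {
			return sub.processEvent(event)
		}
	}

	return "", errInvalidStateTransition
}
```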
lnwallet/chancloser/rbf_coop_test.go
line 1636 at r4 (raw file):
Previously, yyforyongyu (Yong) wrote…
Q: I guess the reestablish logic is implemented in a followup PR?
As in restarts?
So restarts are effectively tested here as we always assert we can do another RBF loop.
The commits in that other PR were added to address the concrete realities of restarts in the scope of the normal daemon operation. The extra details that needed to be handled there are related to:
- The link's lifetime in the switch.
- That we don't load the link into the switch again after restart.
- RPC notifications
lnwallet/chancloser/rbf_coop_test.go
line 1084 at r6 (raw file):
Previously, yyforyongyu (Yong) wrote…
We also need to test that `EarlyRemoteOffer` is processed by sending a `ChannelFlushed` event?
Good idea, I added a test for this case.
I put it with the tests for `ChannelFlushing`, as the goal is to test each state transition: transitively (wlog), testing each state transition entails testing them all composed e2e.
lnwallet/chancloser/rbf_coop_test.go
line 1918 at r7 (raw file):
Previously, yyforyongyu (Yong) wrote…
looks like we need a `String` for `CloseErr`:

2025-03-03 09:54:19.878 [INF] PFSM: FSM(rbf_chan_closer(9626774478b3b09627527a36573dbedc50f8743ffbc29db279cfc44765d3f267:1794882728)): State transition from_state="ClosingNegotiation(local=%!v(PANIC=Error method: runtime error: invalid memory address or nil pointer dereference), remote=<nil>)" to_state="ClosingNegotiation(local=LocalCloseStart, remote=<nil>)"
Done.
lnwallet/chancloser/rbf_coop_transitions.go
line 584 at r2 (raw file):
Previously, yyforyongyu (Yong) wrote…
Q: does it mean there are some inconsistent states that caused the channel to be flushed multiple times?
TBH... this PR has been active for quite some time (see the creation date on the original 4/4). It's possible this bug was fixed in master, so it's no longer needed.
lnwallet/chancloser/rbf_coop_test.go
line 1956 at r7 (raw file):
// From the error state, we should be able to handle the remote party
// kicking off a new iteration for a fee bump.
t.Run("recv ofer restart", func(t *testing.T) {
Done.
lnwallet/channel.go
line 9277 at r5 (raw file):
haveLocalOutput := ourBalance >= localDust
if haveLocalOutput {
	// If a party's script is an OP_RETURN, then we set their
Done.
Force-pushed from 911bf10 to dacc5df
Reviewed 7 of 8 files at r8, 3 of 4 files at r10, 2 of 2 files at r12, 1 of 3 files at r13, 2 of 2 files at r14, all commit messages.
Reviewable status: all files reviewed, 2 unresolved discussions (waiting on @Crypt-iQ and @Roasbeef)
lnwallet/channel.go
line 9282 at r5 (raw file):
Previously, Roasbeef (Olaoluwa Osuntokun) wrote…
So the rationale of this `OP_RETURN` behavior is for when either party has a balance that's above "absolute dust", but considers their output uneconomical at the current fee rate. They can send an `OP_RETURN` as a script to burn their entire output to fees. There was a bit of back n forth in the spec re if this is useful/realistic, but for now it stands.

`haveLocalOutput` is only used once in this function. If their output was going to be dust already, then the `OP_RETURN` cut out doesn't apply.

IIUC, the balance returned from `CoopCloseBalance` reflects the final balance after fees, which is still technically accurate, but either party may wish to burn their entire balance altogether. We still need that intermediate value here as we want to base the check on the balance after fees.
Cool, gotcha.
In this commit, we add an upfront check for `SendWhen` predicates before deciding to launch a goroutine. This ensures that when a message comes along that is already ready to send, we do the send in a synchronous manner.
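A hedged sketch of that upfront check, where only `SendWhen` is taken from the commit message and everything else is an illustrative stand-in for the executor internals:

```go
package sketch

// outgoingMsg is an illustrative stand-in for a message emitted by the
// state machine executor, with an optional send predicate.
type outgoingMsg struct {
	SendWhen func() bool
}

// sendOrDefer sketches the upfront check added in the commit above: if the
// SendWhen predicate already holds (or is absent), do a synchronous,
// in-order send; otherwise fall back to waiting in a goroutine as before.
func sendOrDefer(msg outgoingMsg, send, waitThenSend func(outgoingMsg)) {
	if msg.SendWhen == nil || msg.SendWhen() {
		send(msg)
		return
	}

	go waitThenSend(msg)
}
```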
hmm looks like a flake,
This is g2g pending flake unless we want to fix the flake in the subsequent PR
@@ -110,7 +110,10 @@ func assertStateTransitions[Event any, Env protofsm.Environment](
 	t.Helper()
 
 	for _, expectedState := range expectedStates {
-		newState := <-stateSub.NewItemCreated.ChanOut()
+		newState, err := fn.RecvOrTimeout(
+			stateSub.NewItemCreated.ChanOut(), 10*time.Millisecond,
The flake is failing here with a timeout
Spun off of #8453.