core: make txpool handle reorg due to setHead #19308
Conversation
var (
	rem = pool.chain.GetBlock(oldHead.Hash(), oldHead.Number.Uint64())
	add = pool.chain.GetBlock(newHead.Hash(), newHead.Number.Uint64())
)
if rem == nil {
There's no possible case where `add == nil`? Wondering if it's worth having an additional sanity check here to avoid a panic in this case. I don't have a lot of experience with the underlying `GetBlock()` function, but it looks like it's possible and allowed for it to return a nil pointer.
If there is, that's definitely an error. What happens here is that we just got a signal that the chain reorged, telling us the old head and the new head.
When we ask for more info about the old head, it can be nil if the underlying cause was 'setHead'. However, if the chain can't give us info about the new head, then something is very wrong.
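To make the invariant concrete, here is a minimal, self-contained sketch of the guard being discussed. All types and names here (Hash, Header, Block, blockChain, fakeChain, reset) are simplified, hypothetical stand-ins for go-ethereum's core types, and this is not the PR's exact code: the point is only that a nil old head is tolerated (setHead may have unwound and deleted it), while a nil new head is treated as a hard error.

package main

import "fmt"

// Hypothetical, simplified stand-ins for go-ethereum's core types.
type Hash [32]byte

type Header struct {
	Number uint64
	Hash   Hash
}

type Block struct{ header *Header }

// blockChain mimics the slice of the interface the txpool relies on here:
// GetBlock may return nil when the requested block is unknown, e.g. after
// setHead has rewound the chain and deleted blocks.
type blockChain interface {
	GetBlock(hash Hash, number uint64) *Block
}

// fakeChain knows only the blocks stored in its map.
type fakeChain map[uint64]*Block

func (c fakeChain) GetBlock(_ Hash, number uint64) *Block { return c[number] }

// reset sketches the guard under discussion: a nil old head is tolerated
// (setHead may have deleted it), while a nil new head is a hard error.
func reset(chain blockChain, oldHead, newHead *Header) error {
	rem := chain.GetBlock(oldHead.Hash, oldHead.Number)
	add := chain.GetBlock(newHead.Hash, newHead.Number)
	if add == nil {
		// The chain must always know its own current head; if it does
		// not, bail out loudly rather than panic further down.
		return fmt.Errorf("txpool reset: missing new head block %d", newHead.Number)
	}
	if rem == nil {
		// setHead unwound past the old head: there is nothing to
		// reinject transactions from, so simply adopt the new head.
		fmt.Printf("old head missing (setHead?), resetting to block %d\n", add.header.Number)
		return nil
	}
	// ...normal reorg path: diff the two chains, reinject/discard txs...
	return nil
}

func main() {
	head := &Header{Number: 5}
	chain := fakeChain{5: {header: head}}
	// Old head 9 was deleted by a (simulated) setHead back to block 5.
	if err := reset(chain, &Header{Number: 9}, head); err != nil {
		fmt.Println(err)
	}
}

Checking add before rem makes the failure mode explicit: a chain that cannot describe its own current head returns an error instead of panicking later, while the legitimate setHead case degrades to simply adopting the new head.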
OK, got it, thanks for the context!
SGTM
This PR supersedes #16713 and #19216, and resolves #19212.
It's more or less the same as #19216 in the end, but with some more logging and documentation about the root cause.
It's very easy to replicate the bug thanks to the reports above,