Don't disconnect masternode connections when we have less than the desired amount of outbound nodes #3255
This avoids situations where we end up disconnecting (nearly) all outgoing peers when masternode connection cleanup is happening.
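For illustration, here is a minimal sketch of the intended guard, not the actual Dash Core implementation; the `Peer` struct, its fields, `CleanupMasternodeConnections` and `nDesiredOutbound` are hypothetical stand-ins:

```cpp
// Sketch only: skip dropping masternode-only connections while the node
// still has no more regular outbound peers than it wants. All names below
// are hypothetical simplifications.
#include <algorithm>
#include <vector>

struct Peer {
    bool fMasternodeOnly{false};  // connection opened purely for masternode purposes
    bool fInbound{false};
    bool fDisconnect{false};
};

// Hypothetical cleanup routine: only mark masternode connections for
// disconnection while enough ordinary outbound peers would remain.
void CleanupMasternodeConnections(std::vector<Peer>& peers, int nDesiredOutbound)
{
    // Count outbound peers that are not already scheduled for disconnect.
    int nOutbound = static_cast<int>(std::count_if(peers.begin(), peers.end(),
        [](const Peer& p) { return !p.fInbound && !p.fDisconnect; }));

    for (Peer& p : peers) {
        // Stop as soon as dropping another peer would take us below the
        // desired outbound count, so cleanup cannot strip (nearly) all
        // outgoing peers at once.
        if (nOutbound <= nDesiredOutbound) break;
        if (p.fMasternodeOnly && !p.fInbound && !p.fDisconnect) {
            p.fDisconnect = true;
            --nOutbound;
        }
    }
}
```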
This also indirectly fixes test failures like this, where cleanup happened directly after a block was requested via `getdata`.

The real reason for this test failure is actually a different one, but I was not able to pin it down. What happens is that node4 gets a header announced from peer=1. Node4 then reacts with a `getdata` to retrieve the block. Shortly after that, peer=1 is disconnected due to the masternode connection cleanup. At approximately the same time, node4 receives a `cmpctblock` from peer=0, which is however ignored due to this code (`FillBlock` fails). It gets into this code because `FinalizeNode` was not called at this point, so the block is still considered in-flight.

I would expect that even after peer=1 is disconnected, `FindNextBlocksToDownload` would realize that the block is not in-flight anymore and needs to be re-requested from another node. But for some reason this is not happening, and I was not able to reproduce it locally. As I'm unable to fix the actual bug, I decided to implement a fix that avoids the disconnect in this situation. I believe this fix is also useful on its own.
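For context, here is a rough sketch of the in-flight bookkeeping described above, using simplified stand-in types and names rather than the real `net_processing` structures: a block requested via `getdata` stays marked in-flight for the requesting peer until `FinalizeNode` clears that peer's entries on disconnect, after which the download logic may re-request it from another peer.

```cpp
// Conceptual sketch of in-flight block tracking; names and types are
// simplified stand-ins, not the actual Dash/Bitcoin Core data structures.
#include <cstdint>
#include <map>

using NodeId = int64_t;
using BlockHash = uint64_t;  // stand-in for a real block hash type

// Hypothetical simplified map: which peer a block is currently requested from.
std::map<BlockHash, NodeId> mapBlocksInFlight;

void MarkBlockInFlight(NodeId node, BlockHash hash)
{
    mapBlocksInFlight.emplace(hash, node);
}

// Called when a peer's state is finalized on disconnect; clears its
// in-flight blocks so they become eligible for re-request from other peers.
void FinalizeNodeSketch(NodeId node)
{
    for (auto it = mapBlocksInFlight.begin(); it != mapBlocksInFlight.end();) {
        if (it->second == node) it = mapBlocksInFlight.erase(it);
        else ++it;
    }
}

// While this returns true, a compact block for the same hash from another
// peer may be ignored, which is the situation described above.
bool IsBlockInFlight(BlockHash hash)
{
    return mapBlocksInFlight.count(hash) > 0;
}
```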