Implement MNAUTH and allow unlimited inbound MN connections #2790
Conversation
A few things:
- let's keep the `Hash` suffix to avoid confusion (I pointed at just a few places where the issue occurs; the suggestions are not meant to be committed directly);
- probably a copy/paste mistake when filling the `pubKeys` set.
utACK
We will later need the proRegTxHash
Having serialization and deserialization in the same specialized template results in compilation failures due to the `if (for_read)` branch.
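For illustration only, here is a hypothetical, self-contained example (not the code from this PR) of why that pattern breaks: both branches of a runtime `if (for_read)` are instantiated for every use of the template, so the read branch, which needs a mutable object, makes any instantiation over a const object fail to compile.

```cpp
#include <sstream>
#include <string>

struct Payload {
    std::string data;
};

// Hypothetical combined (de)serializer; not the PR's actual code. Both
// branches are compiled for every instantiation, so the read branch (which
// needs a mutable object) breaks any instantiation with a const object.
template <typename T>
void SerDeser(std::stringstream& s, T& obj, bool for_read)
{
    if (for_read) {
        std::getline(s, obj.data);   // requires a non-const obj.data
    } else {
        s << obj.data << '\n';
    }
}

int main()
{
    std::stringstream ss;
    Payload p{"hello"};
    SerDeser(ss, p, false);          // write: fine, p is mutable

    const Payload cp{"world"};
    (void)cp;
    // SerDeser(ss, cp, false);      // does NOT compile: T deduces to
                                     // const Payload, and the read branch
                                     // would assign to a const member

    Payload q;
    SerDeser(ss, q, true);           // reads back "hello"
    return q.data == "hello" ? 0 : 1;
}
```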
This allows masternodes to authenticate themselves.
Give fresh connections some time to do the VERSION/VERACK handshake and an optional MNAUTH when the peer is a masternode. Once an MNAUTH has happened, the incoming connection is protected against eviction forever. If a timeout of 1 second occurs, or the first message after VERACK is not MNAUTH, the node loses this protection and becomes eligible for eviction.
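A rough sketch of that timing logic, with made-up type and field names (the real implementation lives in Dash Core's connection manager and looks different):

```cpp
#include <chrono>
#include <string>

// Stand-in per-peer state; field names are invented for this sketch.
struct PeerState {
    std::chrono::steady_clock::time_point connectedAt;
    bool verackReceived{false};
    bool mnauthReceived{false};
    bool evictionProtected{true};   // protected while the handshake window is open
};

// Returns whether the peer keeps its eviction protection. Call it on a timer
// and when the first message after VERACK arrives (empty string = none yet).
bool KeepEvictionProtection(PeerState& peer, const std::string& firstMsgAfterVerack)
{
    using namespace std::chrono;

    if (peer.mnauthReceived) {
        return true;                     // authenticated MN: protected for good
    }
    const bool timedOut = steady_clock::now() - peer.connectedAt > seconds(1);
    const bool wrongFirstMsg = peer.verackReceived &&
                               !firstMsgAfterVerack.empty() &&
                               firstMsgAfterVerack != "mnauth";
    if (timedOut || wrongFirstMsg) {
        peer.evictionProtected = false;  // eligible for eviction again
    }
    return peer.evictionProtected;
}

int main()
{
    PeerState peer;
    peer.connectedAt = std::chrono::steady_clock::now();
    peer.verackReceived = true;
    // A non-MNAUTH first message drops the protection immediately.
    return KeepEvictionProtection(peer, "ping") ? 1 : 0;
}
```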
… same one Now that incoming connections from MNs authenticate themselves, we can avoid connecting to the same MNs through intra-quorum connections.
Rebased on develop to fix merge conflicts. Needs re-ACK
re-utACK
```cpp
        return;
    }

    if (strCommand == NetMsgType::MNAUTH) {
```
why isn't it assumed this is the case? I would think this method is only called when that check is true
Nope, it's called for every incoming message. This follows the convention we use for all Dash-specific message handling. For example, everything related to DKGs is handled in CDKGSessionManager::ProcessMessage, which checks by itself which message it actually is. This might look a bit strange in cases where we're only interested in a single message, but I prefer to use the same style/convention for all cases.
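To illustrate the convention, here is a self-contained sketch with stand-in types (not the code from this PR): the handler receives every message and decides on its own whether the command is one it cares about, here only MNAUTH.

```cpp
#include <iostream>
#include <string>

// Stand-ins so the sketch compiles on its own; Dash Core uses CNode and
// NetMsgType::MNAUTH ("mnauth" on the wire).
namespace NetMsgType { const char* const MNAUTH = "mnauth"; }
struct Node { bool fAuthenticatedMN{false}; };

// Simplified sketch of the dispatch convention: the handler is called for
// every incoming message and checks by itself whether the command is one it
// handles; everything else is silently ignored.
void ProcessMessage(Node& node, const std::string& strCommand)
{
    if (strCommand != NetMsgType::MNAUTH) {
        return;  // not ours; another subsystem will handle it
    }
    // ... verify the MNAUTH payload, then mark the peer as authenticated ...
    node.fAuthenticatedMN = true;
    std::cout << "peer authenticated via MNAUTH\n";
}

int main()
{
    Node node;
    ProcessMessage(node, "ping");    // ignored by this handler
    ProcessMessage(node, "mnauth");  // handled
    return node.fAuthenticatedMN ? 0 : 1;
}
```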
👍
…ocessMessage We could be reading multiple messages from a socket buffer at once _without actually processing them yet_, which means that `fSuccessfullyConnected` might not be switched to `true` by the time we have already parsed the `VERACK` message and started parsing the next one. This is basically a false positive, and we drop a legit node as a result even though the order of messages sent by that node was completely fine. To fix this, the logic for tracking the first message after `VERACK` was moved into ProcessMessage. This partially reverts dashpay#2790 (where the issue was initially introduced).
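A simplified, self-contained sketch of the fix described above, with stand-in names (the real code uses CNode and NetMsgType::MNAUTH): the "first message after VERACK" bookkeeping happens when the message is processed, where `fSuccessfullyConnected` is guaranteed to be up to date, rather than when it is read off the socket.

```cpp
#include <string>

// Stand-in node state; the real code uses CNode with similar flags.
struct Node {
    bool fSuccessfullyConnected{false};  // set once VERACK has been processed
    bool fFirstMessageReceived{false};
    bool fFirstMessageIsMNAUTH{false};
};

// Sketch of the fix: decide "was the first message after VERACK an MNAUTH?"
// while processing the message, where fSuccessfullyConnected is guaranteed to
// be up to date, instead of at socket-read time where several messages may
// already sit parsed in the buffer before any of them was processed.
void TrackFirstMessage(Node& node, const std::string& strCommand)
{
    if (!node.fSuccessfullyConnected) {
        return;  // still inside the VERSION/VERACK handshake
    }
    if (!node.fFirstMessageReceived) {
        node.fFirstMessageReceived = true;
        node.fFirstMessageIsMNAUTH = (strCommand == "mnauth");
        // A non-MNAUTH first message only costs the peer its eviction
        // protection; the peer is not disconnected for it.
    }
}

int main()
{
    Node node;
    node.fSuccessfullyConnected = true;  // handshake finished
    TrackFirstMessage(node, "mnauth");
    return node.fFirstMessageIsMNAUTH ? 0 : 1;
}
```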
This implements MNAUTH, which allows us to verify that an incoming connection is indeed from another MN. This in turn allows us to exclude inbound MN connections from limit checks and at the same time protect these connections from eviction.
This is required to ensure that intra-quorum connections stay active as long as they are needed and never get evicted due to MNs being spammed with inbound connections.
The reason we can exclude authenticated MN connections from eviction is that they are naturally limited: MNAUTH drops any previous connection that was authenticated with the same MN, which means each MN can cause only one incoming authenticated (non-evictable) connection.
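Roughly, that per-MN dedup could look like this. This is a sketch with stand-in types; the real code iterates the connection manager's node list and uses the proRegTxHash recorded when MNAUTH succeeded.

```cpp
#include <string>
#include <vector>

// Stand-ins to keep the sketch self-contained; names loosely follow Dash Core.
struct Node {
    std::string verifiedProRegTxHash;  // set once MNAUTH succeeded, else empty
    bool fDisconnect{false};
};

// Sketch: after a peer completes MNAUTH for a given proRegTxHash, drop any
// older connection already authenticated for the same masternode, so each MN
// ends up with at most one authenticated (non-evictable) inbound connection.
void DropDuplicateMNConnections(std::vector<Node*>& nodes, Node* justAuthed,
                                const std::string& proRegTxHash)
{
    for (Node* other : nodes) {
        if (other != justAuthed && other->verifiedProRegTxHash == proRegTxHash) {
            other->fDisconnect = true;  // keep only the newest connection
        }
    }
}

int main()
{
    Node oldConn, newConn;
    oldConn.verifiedProRegTxHash = "abcd1234";
    newConn.verifiedProRegTxHash = "abcd1234";
    std::vector<Node*> nodes{&oldConn, &newConn};
    DropDuplicateMNConnections(nodes, &newConn, "abcd1234");
    return (oldConn.fDisconnect && !newConn.fDisconnect) ? 0 : 1;
}
```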
MNAUTH is also sent and verified for outgoing connections. This is not relevant to inbound connection eviction and DoS (or network partition) protection, but it is useful to avoid unnecessarily connecting to a MN that has already connected to us when ensuring intra-quorum connections.