Add a cosigning protocol to ensure finalizations are unique (#433)

* Add a function to deterministically decide which Serai blocks should be co-signed

There is a 5-minute latency between co-signs, which is also used as the maximal
latency before a co-sign is started.
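
A rough sketch of that rule, purely illustrative (the constant and function names here are not the ones added by this commit), assuming block timestamps in seconds:

```rust
use core::time::Duration;

/// The interval between co-signs described above (an assumed representation).
pub const COSIGN_INTERVAL: Duration = Duration::from_secs(5 * 60);

/// Decide, deterministically from on-chain data alone, whether a block should
/// start a new co-sign: every node compares the same timestamps against the
/// same fixed interval, so all nodes reach the same answer.
pub fn should_cosign(last_cosign_time: u64, block_time: u64) -> bool {
  block_time >= last_cosign_time + COSIGN_INTERVAL.as_secs()
}
```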

* Get all active tributaries we're in at a specific block

* Add and route CosignSubstrateBlock, a new provided TX

* Split queued cosigns per network

* Rename BatchSignId to SubstrateSignId

* Add SubstrateSignableId, a meta-type for either Batch or Block, and modularize around it
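
For context, the rough shape of the new ID types, inferred from the diff further down; the payload types (`[u8; 5]` for the encoded batch ID, `u64` for the block) are assumptions rather than the canonical definitions:

```rust
/// What a Substrate signing session is for: either a Batch or a Serai block.
#[derive(Clone, PartialEq, Eq, Debug)]
pub enum SubstrateSignableId {
  /// An encoded (network, batch ID) pair, as constructed in the diff below.
  Batch([u8; 5]),
  /// A Serai block to cosign (identified here by number; an assumption).
  Block(u64),
}

/// The former BatchSignId, generalized over SubstrateSignableId.
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct SubstrateSignId {
  /// The Substrate key the session signs under.
  pub key: [u8; 32],
  /// What is being signed.
  pub id: SubstrateSignableId,
  /// The attempt number for this session.
  pub attempt: u32,
}
```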

* Handle the CosignSubstrateBlock provided TX

* Revert substrate_signer.rs to develop (and patch to still work)

Due to SubstrateSigner moving when the prior multisig closes, yet cosigning
occurring with the most recent key, a single SubstrateSigner can't be reused.
We could manage multiple SubstrateSigners, yet considering the much lower
specifications for cosigning, I'd rather treat it distinctly.

* Route cosigning through the processor

* Add note to rename SubstrateSigner post-PR

I don't want to do so now in order to preserve the diff's clarity.

* Implement cosign evaluation into the coordinator

* Get tests to compile

* Bug fixes, mark blocks without cosigners available as cosigned

* Correct the ID Batch preprocesses are saved under, add log statements

* Create a dedicated function to handle cosigns

* Correct the flow around Batch verification/queueing

Verifying `Batch`s could stall when a `Batch` was signed before its
predecessors/before the block it's contained in was cosigned (the latter being
inevitable as we can't sign a block containing a signed batch before signing
the batch).

Now, Batch verification happens on a distinct async task in order to not block
the handling of processor messages. This task is the sole caller of verify in
order to ensure last_verified_batch isn't unexpectedly mutated.

When the processor message handler needs to access it, or needs to queue a
Batch, it associates the DB TXN with a lock preventing the other task from
doing so.
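
A minimal sketch of that arrangement, assuming tokio; the names (`CosignState`, `batch_verification_task`) are illustrative, not the coordinator's actual types:

```rust
use std::{sync::Arc, time::Duration};
use tokio::sync::Mutex;

struct CosignState {
  last_verified_batch: u32,
}

/// The dedicated task is the sole code path which advances last_verified_batch.
async fn batch_verification_task(state: Arc<Mutex<CosignState>>) {
  loop {
    {
      let mut state = state.lock().await;
      // Stand-in for verifying the next Batch and persisting the result.
      state.last_verified_batch += 1;
    }
    tokio::time::sleep(Duration::from_secs(5)).await;
  }
}

/// The processor-message handler holds the same lock for the lifetime of its
/// DB TXN, so the verification task can't mutate the state underneath it.
async fn handle_processor_message(state: Arc<Mutex<CosignState>>) {
  let state = state.lock().await;
  let _latest = state.last_verified_batch;
  // ... open the TXN, queue the Batch, commit, then drop the guard ...
}
```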

This lock, as currently implemented, is a poor and inefficient design. It
should be modified to the pattern used for cosign management. Additionally, a
new primitive of a DB-backed channel may be immensely valuable.
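
As a sketch of what such a primitive could look like (against a hypothetical key-value `Db` trait, not Serai's actual DB abstraction): sends append under an incrementing tail index, receives advance a head cursor, and because both go through the database the queue survives restarts and can participate in the caller's TXN.

```rust
/// A stand-in key-value store; not Serai's actual DB trait.
pub trait Db {
  fn get(&self, key: &[u8]) -> Option<Vec<u8>>;
  fn put(&mut self, key: &[u8], value: &[u8]);
}

fn suffixed_key(channel: &[u8], suffix: &[u8]) -> Vec<u8> {
  let mut key = channel.to_vec();
  key.extend_from_slice(suffix);
  key
}

fn read_u64(db: &impl Db, key: &[u8]) -> u64 {
  db.get(key).map_or(0, |bytes| u64::from_le_bytes(bytes.try_into().unwrap()))
}

/// Append a message at the tail index, then advance the tail.
pub fn send(db: &mut impl Db, channel: &[u8], msg: &[u8]) {
  let tail_key = suffixed_key(channel, b"_tail");
  let tail = read_u64(db, &tail_key);
  db.put(&suffixed_key(channel, &tail.to_le_bytes()), msg);
  db.put(&tail_key, &(tail + 1).to_le_bytes());
}

/// Read the message at the head index and, if one exists, consume it.
pub fn recv(db: &mut impl Db, channel: &[u8]) -> Option<Vec<u8>> {
  let head_key = suffixed_key(channel, b"_head");
  let head = read_u64(db, &head_key);
  let msg = db.get(&suffixed_key(channel, &head.to_le_bytes()))?;
  db.put(&head_key, &(head + 1).to_le_bytes());
  Some(msg)
}
```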

Fixes a standing potential deadlock and a deadlock introduced with the
cosigning protocol.

* Working full-stack tests

After the last commit, this only required extending a timeout.

* Replace "co-sign" with "cosign" to make finding text easier

* Update the coordinator tests to support cosigning

* Inline prior_batch calculation to prevent panic on rotation

Noticed when doing a final review of the branch.
Luke Parker authored on 2023-11-15 16:57:21 -05:00; committed by GitHub
parent 79e4cce2f6
commit 96f1d26f7a
29 changed files with 1900 additions and 348 deletions

@@ -26,10 +26,10 @@ pub(crate) async fn recv_batch_preprocesses(
   substrate_key: &[u8; 32],
   batch: &Batch,
   attempt: u32,
-) -> (BatchSignId, HashMap<Participant, Vec<u8>>) {
-  let id = BatchSignId {
+) -> (SubstrateSignId, HashMap<Participant, Vec<u8>>) {
+  let id = SubstrateSignId {
     key: *substrate_key,
-    id: (batch.network, batch.id).encode().try_into().unwrap(),
+    id: SubstrateSignableId::Batch((batch.network, batch.id).encode().try_into().unwrap()),
     attempt,
   };
@@ -86,7 +86,7 @@ pub(crate) async fn recv_batch_preprocesses(
 pub(crate) async fn sign_batch(
   coordinators: &mut [Coordinator],
   key: [u8; 32],
-  id: BatchSignId,
+  id: SubstrateSignId,
   preprocesses: HashMap<Participant, Vec<u8>>,
 ) -> SignedBatch {
   assert_eq!(preprocesses.len(), THRESHOLD);
@@ -96,7 +96,7 @@ pub(crate) async fn sign_batch(
     if preprocesses.contains_key(&i) {
       coordinator
-        .send_message(messages::coordinator::CoordinatorMessage::BatchPreprocesses {
+        .send_message(messages::coordinator::CoordinatorMessage::SubstratePreprocesses {
           id: id.clone(),
           preprocesses: clone_without(&preprocesses, &i),
         })
@@ -111,7 +111,7 @@ pub(crate) async fn sign_batch(
     if preprocesses.contains_key(&i) {
       match coordinator.recv_message().await {
         messages::ProcessorMessage::Coordinator(
-          messages::coordinator::ProcessorMessage::BatchShare {
+          messages::coordinator::ProcessorMessage::SubstrateShare {
            id: this_id,
            shares: mut these_shares,
          },
@@ -130,7 +130,7 @@ pub(crate) async fn sign_batch(
     if preprocesses.contains_key(&i) {
       coordinator
-        .send_message(messages::coordinator::CoordinatorMessage::BatchShares {
+        .send_message(messages::coordinator::CoordinatorMessage::SubstrateShares {
           id: id.clone(),
           shares: clone_without(&shares, &i),
         })