Mirror of https://github.com/serai-dex/serai.git
One Round DKG (#589)
* Upstream GBP, divisor, circuit abstraction, and EC gadgets from FCMP++
* Initial eVRF implementation
  Not quite done yet. It needs to communicate the resulting points and proofs to extract them from the Pedersen Commitments in order to return those, and then be tested.
* Add the openings of the PCs to the eVRF as necessary
* Add implementation of secq256k1
* Make DKG Encryption a bit more flexible
  No longer requires the use of an EncryptionKeyMessage, and allows pre-defined keys for encryption.
* Make NUM_BITS an argument for the field macro
* Have the eVRF take a Zeroizing private key
* Initial eVRF-based DKG
* Add embedwards25519 curve
* Inline the eVRF into the DKG library
  Due to how we're handling share encryption, we'd either need two circuits or to dedicate this circuit to the DKG. The latter makes sense at this time.
* Add documentation to the eVRF-based DKG
* Add paragraph claiming robustness
* Update to the new eVRF proof
* Finish routing the eVRF functionality
  Still needs errors and serialization, along with a few other TODOs.
* Add initial eVRF DKG test
* Improve eVRF DKG
  Updates how we calculate verification shares, improves performance when extracting multiple sets of keys, and adds more to the test for it.
* Start using a proper error for the eVRF DKG
* Resolve various TODOs
  Supports recovering multiple key shares from the eVRF DKG. Inlines two loops to save 2**16 iterations. Adds support for creating a constant-time representation of scalars < NUM_BITS.
* Ban zero ECDH keys, document non-zero requirements
* Implement eVRF traits, all the way up to the DKG, for secp256k1/ed25519
* Add Ristretto eVRF trait impls
* Support participating multiple times in the eVRF DKG
* Only participate once per key, not once per key share
* Rewrite processor key-gen around the eVRF DKG
  Still a WIP.
* Finish routing the new key gen in the processor
  Doesn't touch the tests, coordinator, nor Substrate yet. `cargo +nightly fmt && cargo +nightly-2024-07-01 clippy --all-features -p serai-processor` does pass.
* Deduplicate and better document in processor key_gen
* Update serai-processor tests to the new key gen
* Correct amount of yx coefficients, get processor key gen test to pass
* Add embedded elliptic curve keys to Substrate
* Update processor key gen tests to the eVRF DKG
* Have set_keys take signature_participants, not removed_participants
  Now no one is removed from the DKG. However, only `t` people publish the key. Uses a BitVec for an efficient encoding of the participants (sketched below).
* Update the coordinator binary for the new DKG
  This does not yet update any tests.
* Add sensible Debug to key_gen::[Processor, Coordinator]Message
* Have the DKG explicitly declare how to interpolate its shares
  Removes the hack for MuSig where we multiply keys by the inverse of their Lagrange interpolation factor.
* Replace Interpolation::None with Interpolation::Constant
  Allows the MuSig DKG to keep the secret share as the original private key, enabling deriving FROST nonces consistently regardless of the MuSig context.
* Get coordinator tests to pass
* Update spec to the new DKG
* Get clippy to pass across the repo
* cargo machete
* Add an extra sleep to ensure expected ordering of `Participation`s
* Update orchestration
* Remove bad panic in coordinator
  It expected ConfirmationShare to be n-of-n, not t-of-n.
* Improve documentation on functions
* Update TX size limit
  We now no longer have to support the ridiculous case of having 49 DKG participations within a 101-of-150 DKG. It does remain quite high due to needing to _sign_ so many times. It may be optimal for parties with multiple key shares to independently send their preprocesses/shares (despite the overhead that'll cause with signatures and the transaction structure).
* Correct error in the Processor spec document
* Update a few comments in the validator-sets pallet
* Send/Recv Participation one at a time
  Sending all, then attempting to receive all in an expected order, wasn't working even with notable delays between sending messages. This points to the mempool not working as expected...
* Correct ThresholdKeys serialization in modular-frost test
* Update existing TX size limit test for the new DKG parameters
* Increase time allowed for the DKG on the GH CI
* Correct construction of signature_participants in serai-client tests
  Fault identified by akil.
* Further contextualize DkgConfirmer by ValidatorSet
  Caught by a safety check that we wouldn't reuse preprocesses across messages. That raises the question of whether we were previously reusing preprocesses (reusing keys)? Except that'd have caused a variety of signing failures (suggesting we had some staggered timing avoiding it in practice, but yes, this was possible in theory).
* Add necessary calls to set_embedded_elliptic_curve_key in coordinator set rotation tests
* Correct shimmed setting of a secq256k1 key
* cargo fmt
* Don't use `[0; 32]` for the embedded keys in the coordinator rotation test
  The key_gen function expects the random values already decided.
* Big-endian secq256k1 scalars
  Also restores the prior, safer, Encryption::register function.
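The `signature_participants` BitVec mentioned above replaces per-participant removal with a one-bit-per-validator inclusion mask. A minimal sketch of that encoding, assuming the `bitvec` crate (1.x); the helper names here are illustrative, not the repository's API:

use bitvec::{order::Lsb0, vec::BitVec};

// Hypothetical: one bit per validator, in validator order, set if that
// validator contributed to the t-of-n set_keys signature.
fn encode_signature_participants(participated: &[bool]) -> BitVec<u8, Lsb0> {
  let mut bits = BitVec::new();
  for &p in participated {
    bits.push(p);
  }
  bits
}

fn main() {
  // Example: validators 0, 2, and 3 of a 5-validator set signed.
  let bits = encode_signature_participants(&[true, false, true, true, false]);
  assert_eq!(bits.count_ones(), 3);
  // A verifier recovers the included validators' indices when rebuilding
  // the key the signature is checked against.
  assert_eq!(bits.iter_ones().collect::<Vec<_>>(), vec![0, 2, 3]);
}

The win over a vector of removed addresses is size: n bits per set, instead of 32 bytes per excluded validator.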
@@ -7,12 +7,8 @@ use zeroize::Zeroizing;
 use rand_core::{RngCore, CryptoRng, OsRng};

 use futures_util::{task::Poll, poll};

-use ciphersuite::{
-  group::{ff::Field, GroupEncoding},
-  Ciphersuite, Ristretto,
-};
+use ciphersuite::{group::ff::Field, Ciphersuite, Ristretto};

-use sp_application_crypto::sr25519;
 use borsh::BorshDeserialize;

 use serai_client::{
   primitives::NetworkId,
@@ -52,12 +48,22 @@ pub fn new_spec<R: RngCore + CryptoRng>(

   let set = ValidatorSet { session: Session(0), network: NetworkId::Bitcoin };

-  let set_participants = keys
+  let validators = keys
     .iter()
-    .map(|key| (sr25519::Public((<Ristretto as Ciphersuite>::generator() * **key).to_bytes()), 1))
+    .map(|key| ((<Ristretto as Ciphersuite>::generator() * **key), 1))
     .collect::<Vec<_>>();

-  let res = TributarySpec::new(serai_block, start_time, set, set_participants);
+  // Generate random eVRF keys as none of these tests rely on them to have any structure
+  let mut evrf_keys = vec![];
+  for _ in 0 .. keys.len() {
+    let mut substrate = [0; 32];
+    OsRng.fill_bytes(&mut substrate);
+    let mut network = vec![0; 64];
+    OsRng.fill_bytes(&mut network);
+    evrf_keys.push((substrate, network));
+  }
+
+  let res = TributarySpec::new(serai_block, start_time, set, validators, evrf_keys);
   assert_eq!(
     TributarySpec::deserialize_reader(&mut borsh::to_vec(&res).unwrap().as_slice()).unwrap(),
     res,
@@ -1,5 +1,4 @@
 use core::time::Duration;
-use std::collections::HashMap;

 use zeroize::Zeroizing;
 use rand_core::{RngCore, OsRng};
@@ -9,7 +8,7 @@ use frost::Participant;

 use sp_runtime::traits::Verify;
 use serai_client::{
-  primitives::{SeraiAddress, Signature},
+  primitives::Signature,
   validator_sets::primitives::{ValidatorSet, KeyPair},
 };
@@ -17,10 +16,7 @@ use tokio::time::sleep;

 use serai_db::{Get, DbTxn, Db, MemDb};

-use processor_messages::{
-  key_gen::{self, KeyGenId},
-  CoordinatorMessage,
-};
+use processor_messages::{key_gen, CoordinatorMessage};

 use tributary::{TransactionTrait, Tributary};
@@ -54,44 +50,41 @@ async fn dkg_test() {
   tokio::spawn(run_tributaries(tributaries.clone()));

   let mut txs = vec![];
-  // Create DKG commitments for each key
+  // Create DKG participation for each key
   for key in &keys {
-    let attempt = 0;
-    let mut commitments = vec![0; 256];
-    OsRng.fill_bytes(&mut commitments);
+    let mut participation = vec![0; 4096];
+    OsRng.fill_bytes(&mut participation);

-    let mut tx = Transaction::DkgCommitments {
-      attempt,
-      commitments: vec![commitments],
-      signed: Transaction::empty_signed(),
-    };
+    let mut tx =
+      Transaction::DkgParticipation { participation, signed: Transaction::empty_signed() };
     tx.sign(&mut OsRng, spec.genesis(), key);
     txs.push(tx);
   }

   let block_before_tx = tributaries[0].1.tip().await;

-  // Publish all commitments but one
-  for (i, tx) in txs.iter().enumerate().skip(1) {
+  // Publish t-1 participations
+  let t = ((keys.len() * 2) / 3) + 1;
+  for (i, tx) in txs.iter().take(t - 1).enumerate() {
     assert_eq!(tributaries[i].1.add_transaction(tx.clone()).await, Ok(true));
   }

   // Wait until these are included
-  for tx in txs.iter().skip(1) {
+  for tx in txs.iter().take(t - 1) {
     wait_for_tx_inclusion(&tributaries[0].1, block_before_tx, tx.hash()).await;
   }

-  let expected_commitments: HashMap<_, _> = txs
+  let expected_participations = txs
     .iter()
     .enumerate()
     .map(|(i, tx)| {
-      if let Transaction::DkgCommitments { commitments, .. } = tx {
-        (Participant::new((i + 1).try_into().unwrap()).unwrap(), commitments[0].clone())
+      if let Transaction::DkgParticipation { participation, .. } = tx {
+        CoordinatorMessage::KeyGen(key_gen::CoordinatorMessage::Participation {
+          session: spec.set().session,
+          participant: Participant::new((i + 1).try_into().unwrap()).unwrap(),
+          participation: participation.clone(),
+        })
       } else {
-        panic!("txs had non-commitments");
+        panic!("txs wasn't a DkgParticipation");
       }
     })
-    .collect();
+    .collect::<Vec<_>>();

 async fn new_processors(
   db: &mut MemDb,
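For intuition on the `t` computed above: the threshold is the integer ⌊2n/3⌋ + 1 of the n key shares, so the test first publishes one fewer participation than needed. A quick check of that arithmetic (the concrete n values are only examples):

fn threshold(n: usize) -> usize {
  ((n * 2) / 3) + 1
}

fn main() {
  assert_eq!(threshold(5), 4); // a 5-validator test set needs 4 participations
  assert_eq!(threshold(150), 101); // the 101-of-150 case from the commit message
}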
@@ -120,28 +113,30 @@ async fn dkg_test() {
     processors
   }

-  // Instantiate a scanner and verify it has nothing to report
+  // Instantiate a scanner and verify it has the first two participations to report (and isn't
+  // waiting for `t`)
   let processors = new_processors(&mut dbs[0], &keys[0], &spec, &tributaries[0].1).await;
-  assert!(processors.0.read().await.is_empty());
+  assert_eq!(processors.0.read().await.get(&spec.set().network).unwrap().len(), t - 1);

-  // Publish the last commitment
+  // Publish the rest of the participations
   let block_before_tx = tributaries[0].1.tip().await;
-  assert_eq!(tributaries[0].1.add_transaction(txs[0].clone()).await, Ok(true));
-  wait_for_tx_inclusion(&tributaries[0].1, block_before_tx, txs[0].hash()).await;
-  sleep(Duration::from_secs(Tributary::<MemDb, Transaction, LocalP2p>::block_time().into())).await;
+  for tx in txs.iter().skip(t - 1) {
+    assert_eq!(tributaries[0].1.add_transaction(tx.clone()).await, Ok(true));
+    wait_for_tx_inclusion(&tributaries[0].1, block_before_tx, tx.hash()).await;
+  }

-  // Verify the scanner emits a KeyGen::Commitments message
+  // Verify the scanner emits all KeyGen::Participations messages
   handle_new_blocks::<_, _, _, _, _, LocalP2p>(
     &mut dbs[0],
     &keys[0],
     &|_, _, _, _| async {
-      panic!("provided TX caused recognized_id to be called after Commitments")
+      panic!("provided TX caused recognized_id to be called after DkgParticipation")
     },
     &processors,
     &(),
     &|_| async {
       panic!(
-        "test tried to publish a new Tributary TX from handle_application_tx after Commitments"
+        "test tried to publish a new Tributary TX from handle_application_tx after DkgParticipation"
       )
     },
     &spec,
@@ -150,17 +145,11 @@ async fn dkg_test() {
   .await;
   {
     let mut msgs = processors.0.write().await;
     assert_eq!(msgs.len(), 1);
     let msgs = msgs.get_mut(&spec.set().network).unwrap();
-    let mut expected_commitments = expected_commitments.clone();
-    expected_commitments.remove(&Participant::new((1).try_into().unwrap()).unwrap());
-    assert_eq!(
-      msgs.pop_front().unwrap(),
-      CoordinatorMessage::KeyGen(key_gen::CoordinatorMessage::Commitments {
-        id: KeyGenId { session: spec.set().session, attempt: 0 },
-        commitments: expected_commitments
-      })
-    );
+    assert_eq!(msgs.len(), keys.len());
+    for expected in &expected_participations {
+      assert_eq!(&msgs.pop_front().unwrap(), expected);
+    }
     assert!(msgs.is_empty());
   }
@@ -168,149 +157,14 @@ async fn dkg_test() {
   for (i, key) in keys.iter().enumerate().skip(1) {
     let processors = new_processors(&mut dbs[i], key, &spec, &tributaries[i].1).await;
     let mut msgs = processors.0.write().await;
     assert_eq!(msgs.len(), 1);
     let msgs = msgs.get_mut(&spec.set().network).unwrap();
-    let mut expected_commitments = expected_commitments.clone();
-    expected_commitments.remove(&Participant::new((i + 1).try_into().unwrap()).unwrap());
-    assert_eq!(
-      msgs.pop_front().unwrap(),
-      CoordinatorMessage::KeyGen(key_gen::CoordinatorMessage::Commitments {
-        id: KeyGenId { session: spec.set().session, attempt: 0 },
-        commitments: expected_commitments
-      })
-    );
+    assert_eq!(msgs.len(), keys.len());
+    for expected in &expected_participations {
+      assert_eq!(&msgs.pop_front().unwrap(), expected);
+    }
     assert!(msgs.is_empty());
   }

-  // Now do shares
-  let mut txs = vec![];
-  for (k, key) in keys.iter().enumerate() {
-    let attempt = 0;
-
-    let mut shares = vec![vec![]];
-    for i in 0 .. keys.len() {
-      if i != k {
-        let mut share = vec![0; 256];
-        OsRng.fill_bytes(&mut share);
-        shares.last_mut().unwrap().push(share);
-      }
-    }
-
-    let mut txn = dbs[k].txn();
-    let mut tx = Transaction::DkgShares {
-      attempt,
-      shares,
-      confirmation_nonces: crate::tributary::dkg_confirmation_nonces(key, &spec, &mut txn, 0),
-      signed: Transaction::empty_signed(),
-    };
-    txn.commit();
-    tx.sign(&mut OsRng, spec.genesis(), key);
-    txs.push(tx);
-  }
-
-  let block_before_tx = tributaries[0].1.tip().await;
-  for (i, tx) in txs.iter().enumerate().skip(1) {
-    assert_eq!(tributaries[i].1.add_transaction(tx.clone()).await, Ok(true));
-  }
-  for tx in txs.iter().skip(1) {
-    wait_for_tx_inclusion(&tributaries[0].1, block_before_tx, tx.hash()).await;
-  }
-
-  // With just 4 sets of shares, nothing should happen yet
-  handle_new_blocks::<_, _, _, _, _, LocalP2p>(
-    &mut dbs[0],
-    &keys[0],
-    &|_, _, _, _| async {
-      panic!("provided TX caused recognized_id to be called after some shares")
-    },
-    &processors,
-    &(),
-    &|_| async {
-      panic!(
-        "test tried to publish a new Tributary TX from handle_application_tx after some shares"
-      )
-    },
-    &spec,
-    &tributaries[0].1.reader(),
-  )
-  .await;
-  assert_eq!(processors.0.read().await.len(), 1);
-  assert!(processors.0.read().await[&spec.set().network].is_empty());
-
-  // Publish the final set of shares
-  let block_before_tx = tributaries[0].1.tip().await;
-  assert_eq!(tributaries[0].1.add_transaction(txs[0].clone()).await, Ok(true));
-  wait_for_tx_inclusion(&tributaries[0].1, block_before_tx, txs[0].hash()).await;
-  sleep(Duration::from_secs(Tributary::<MemDb, Transaction, LocalP2p>::block_time().into())).await;
-
-  // Each scanner should emit a distinct shares message
-  let shares_for = |i: usize| {
-    CoordinatorMessage::KeyGen(key_gen::CoordinatorMessage::Shares {
-      id: KeyGenId { session: spec.set().session, attempt: 0 },
-      shares: vec![txs
-        .iter()
-        .enumerate()
-        .filter_map(|(l, tx)| {
-          if let Transaction::DkgShares { shares, .. } = tx {
-            if i == l {
-              None
-            } else {
-              let relative_i = i - (if i > l { 1 } else { 0 });
-              Some((
-                Participant::new((l + 1).try_into().unwrap()).unwrap(),
-                shares[0][relative_i].clone(),
-              ))
-            }
-          } else {
-            panic!("txs had non-shares");
-          }
-        })
-        .collect::<HashMap<_, _>>()],
-    })
-  };
-
-  // Any scanner which has handled the prior blocks should only emit the new event
-  for (i, key) in keys.iter().enumerate() {
-    handle_new_blocks::<_, _, _, _, _, LocalP2p>(
-      &mut dbs[i],
-      key,
-      &|_, _, _, _| async { panic!("provided TX caused recognized_id to be called after shares") },
-      &processors,
-      &(),
-      &|_| async { panic!("test tried to publish a new Tributary TX from handle_application_tx") },
-      &spec,
-      &tributaries[i].1.reader(),
-    )
-    .await;
-    {
-      let mut msgs = processors.0.write().await;
-      assert_eq!(msgs.len(), 1);
-      let msgs = msgs.get_mut(&spec.set().network).unwrap();
-      assert_eq!(msgs.pop_front().unwrap(), shares_for(i));
-      assert!(msgs.is_empty());
-    }
-  }
-
-  // Yet new scanners should emit all events
-  for (i, key) in keys.iter().enumerate() {
-    let processors = new_processors(&mut MemDb::new(), key, &spec, &tributaries[i].1).await;
-    let mut msgs = processors.0.write().await;
-    assert_eq!(msgs.len(), 1);
-    let msgs = msgs.get_mut(&spec.set().network).unwrap();
-    let mut expected_commitments = expected_commitments.clone();
-    expected_commitments.remove(&Participant::new((i + 1).try_into().unwrap()).unwrap());
-    assert_eq!(
-      msgs.pop_front().unwrap(),
-      CoordinatorMessage::KeyGen(key_gen::CoordinatorMessage::Commitments {
-        id: KeyGenId { session: spec.set().session, attempt: 0 },
-        commitments: expected_commitments
-      })
-    );
-    assert_eq!(msgs.pop_front().unwrap(), shares_for(i));
-    assert!(msgs.is_empty());
-  }

   // Send DkgConfirmed
   let mut substrate_key = [0; 32];
   OsRng.fill_bytes(&mut substrate_key);
   let mut network_key = vec![0; usize::try_from((OsRng.next_u64() % 32) + 32).unwrap()];
@@ -319,17 +173,19 @@ async fn dkg_test() {

   let mut txs = vec![];
   for (i, key) in keys.iter().enumerate() {
-    let attempt = 0;
     let mut txn = dbs[i].txn();
-    let share =
-      crate::tributary::generated_key_pair::<MemDb>(&mut txn, key, &spec, &key_pair, 0).unwrap();
-    txn.commit();
+    // Claim we've generated the key pair
+    crate::tributary::generated_key_pair::<MemDb>(&mut txn, spec.genesis(), &key_pair);

-    let mut tx = Transaction::DkgConfirmed {
+    // Publish the nonces
+    let attempt = 0;
+    let mut tx = Transaction::DkgConfirmationNonces {
       attempt,
-      confirmation_share: share,
+      confirmation_nonces: crate::tributary::dkg_confirmation_nonces(key, &spec, &mut txn, 0),
       signed: Transaction::empty_signed(),
     };
+    txn.commit();
     tx.sign(&mut OsRng, spec.genesis(), key);
     txs.push(tx);
   }
@@ -341,6 +197,35 @@ async fn dkg_test() {
     wait_for_tx_inclusion(&tributaries[0].1, block_before_tx, tx.hash()).await;
   }

+  // This should not cause any new processor event as the processor doesn't handle DKG confirming
+  for (i, key) in keys.iter().enumerate() {
+    handle_new_blocks::<_, _, _, _, _, LocalP2p>(
+      &mut dbs[i],
+      key,
+      &|_, _, _, _| async {
+        panic!("provided TX caused recognized_id to be called after DkgConfirmationNonces")
+      },
+      &processors,
+      &(),
+      // The Tributary handler should publish ConfirmationShare itself after ConfirmationNonces
+      &|tx| async { assert_eq!(tributaries[i].1.add_transaction(tx).await, Ok(true)) },
+      &spec,
+      &tributaries[i].1.reader(),
+    )
+    .await;
+    {
+      assert!(processors.0.read().await.get(&spec.set().network).unwrap().is_empty());
+    }
+  }
+
+  // Yet once these TXs are on-chain, the tributary should itself publish the confirmation shares
+  // This means in the block after the next block, the keys should be set onto Serai
+  // Sleep twice as long as two blocks, in case there's some stability issue
+  sleep(Duration::from_secs(
+    2 * 2 * u64::from(Tributary::<MemDb, Transaction, LocalP2p>::block_time()),
+  ))
+  .await;
+
 struct CheckPublishSetKeys {
   spec: TributarySpec,
   key_pair: KeyPair,
@@ -351,19 +236,24 @@ async fn dkg_test() {
     &self,
     _db: &(impl Sync + Get),
     set: ValidatorSet,
-    removed: Vec<SeraiAddress>,
     key_pair: KeyPair,
+    signature_participants: bitvec::vec::BitVec<u8, bitvec::order::Lsb0>,
     signature: Signature,
   ) {
     assert_eq!(set, self.spec.set());
-    assert!(removed.is_empty());
     assert_eq!(self.key_pair, key_pair);
     assert!(signature.verify(
-      &*serai_client::validator_sets::primitives::set_keys_message(&set, &[], &key_pair),
+      &*serai_client::validator_sets::primitives::set_keys_message(&set, &key_pair),
       &serai_client::Public(
         frost::dkg::musig::musig_key::<Ristretto>(
           &serai_client::validator_sets::primitives::musig_context(set),
-          &self.spec.validators().into_iter().map(|(validator, _)| validator).collect::<Vec<_>>()
+          &self
+            .spec
+            .validators()
+            .into_iter()
+            .zip(signature_participants)
+            .filter_map(|((validator, _), included)| included.then_some(validator))
+            .collect::<Vec<_>>()
         )
         .unwrap()
         .to_bytes()
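The verification above rebuilds the MuSig key from only the validators whose bit is set in `signature_participants`. A standalone sketch of that zip-and-filter pattern, with a placeholder validator type standing in for the actual Ristretto points (this mirrors the shape of the diff, not the repository's types):

use bitvec::{order::Lsb0, vec::BitVec};

// Placeholder for a validator's public key.
type Validator = [u8; 32];

// Keep only the validators flagged as signature participants, preserving
// validator order (the order the key derivation expects).
fn included_signers(
  validators: &[(Validator, u16)],
  signature_participants: &BitVec<u8, Lsb0>,
) -> Vec<Validator> {
  validators
    .iter()
    .zip(signature_participants.iter().by_vals())
    .filter_map(|(&(validator, _weight), included)| included.then_some(validator))
    .collect()
}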
@@ -6,7 +6,7 @@ use ciphersuite::{group::Group, Ciphersuite, Ristretto};

 use scale::{Encode, Decode};
 use serai_client::{
-  primitives::{SeraiAddress, Signature},
+  primitives::Signature,
   validator_sets::primitives::{MAX_KEY_SHARES_PER_SET, ValidatorSet, KeyPair},
 };
 use processor_messages::coordinator::SubstrateSignableId;
@@ -32,8 +32,8 @@ impl PublishSeraiTransaction for () {
     &self,
     _db: &(impl Sync + serai_db::Get),
     _set: ValidatorSet,
-    _removed: Vec<SeraiAddress>,
     _key_pair: KeyPair,
+    _signature_participants: bitvec::vec::BitVec<u8, bitvec::order::Lsb0>,
     _signature: Signature,
   ) {
     panic!("publish_set_keys was called in test")
@@ -84,23 +84,25 @@ fn tx_size_limit() {
   use tributary::TRANSACTION_SIZE_LIMIT;

   let max_dkg_coefficients = (MAX_KEY_SHARES_PER_SET * 2).div_ceil(3) + 1;
-  let max_key_shares_per_individual = MAX_KEY_SHARES_PER_SET - max_dkg_coefficients;
-  // Handwave the DKG Commitments size as the size of the commitments to the coefficients and
-  // 1024 bytes for all overhead
-  let handwaved_dkg_commitments_size = (max_dkg_coefficients * MAX_KEY_LEN) + 1024;
-  assert!(
-    u32::try_from(TRANSACTION_SIZE_LIMIT).unwrap() >=
-      (handwaved_dkg_commitments_size * max_key_shares_per_individual)
-  );
+  // n coefficients
+  // 2 ECDH values per recipient, and the encrypted share
+  let elements_outside_of_proof = max_dkg_coefficients + ((2 + 1) * MAX_KEY_SHARES_PER_SET);
+  // Then Pedersen Vector Commitments for each DH done, and the associated overhead in the proof
+  // It's handwaved as one commitment per DH, where we do 2 per coefficient and 1 for the explicit
+  // ECDHs
+  let vector_commitments = (2 * max_dkg_coefficients) + (2 * MAX_KEY_SHARES_PER_SET);
+  // Then we have commitments to the `t` polynomial of length 2 + 2 nc, where nc is the amount of
+  // commitments
+  let t_commitments = 2 + (2 * vector_commitments);
+  // The remainder of the proof should be ~30 elements
+  let proof_elements = 30;

-  // Encryption key, PoP (2 elements), message
-  let elements_per_share = 4;
-  let handwaved_dkg_shares_size =
-    (elements_per_share * MAX_KEY_LEN * MAX_KEY_SHARES_PER_SET) + 1024;
-  assert!(
-    u32::try_from(TRANSACTION_SIZE_LIMIT).unwrap() >=
-      (handwaved_dkg_shares_size * max_key_shares_per_individual)
-  );
+  let handwaved_dkg_size =
+    ((elements_outside_of_proof + vector_commitments + t_commitments + proof_elements) *
+      MAX_KEY_LEN) +
+      1024;
+  // Further scale by two in case of any errors in the above
+  assert!(u32::try_from(TRANSACTION_SIZE_LIMIT).unwrap() >= (2 * handwaved_dkg_size));
 }

 #[test]
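To put rough numbers on the handwave above: with the 150-key-share ceiling implied by the commit message's 101-of-150 case, and treating the element width as a parameter (the actual `MAX_KEY_LEN` value isn't shown in this diff), the arithmetic works out as follows:

// Mirrors the test's formula under the assumption MAX_KEY_SHARES_PER_SET = 150.
fn main() {
  let n: u32 = 150;
  let max_dkg_coefficients = (n * 2).div_ceil(3) + 1; // 101
  let elements_outside_of_proof = max_dkg_coefficients + ((2 + 1) * n); // 551
  let vector_commitments = (2 * max_dkg_coefficients) + (2 * n); // 502
  let t_commitments = 2 + (2 * vector_commitments); // 1006
  let proof_elements = 30;
  let total = elements_outside_of_proof + vector_commitments + t_commitments + proof_elements;
  assert_eq!(total, 2089);
  // With a hypothetical 32-byte element, the doubled handwaved size is ~133 KiB.
  assert_eq!(2 * ((total * 32) + 1024), 135_744);
}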
@@ -143,84 +145,34 @@ fn serialize_sign_data() {

 #[test]
 fn serialize_transaction() {
-  test_read_write(&Transaction::RemoveParticipantDueToDkg {
+  test_read_write(&Transaction::RemoveParticipant {
     participant: <Ristretto as Ciphersuite>::G::random(&mut OsRng),
     signed: random_signed_with_nonce(&mut OsRng, 0),
   });

-  {
-    let mut commitments = vec![random_vec(&mut OsRng, 512)];
-    for _ in 0 .. (OsRng.next_u64() % 100) {
-      let mut temp = commitments[0].clone();
-      OsRng.fill_bytes(&mut temp);
-      commitments.push(temp);
-    }
-    test_read_write(&Transaction::DkgCommitments {
-      attempt: random_u32(&mut OsRng),
-      commitments,
-      signed: random_signed_with_nonce(&mut OsRng, 0),
-    });
-  }
+  test_read_write(&Transaction::DkgParticipation {
+    participation: random_vec(&mut OsRng, 4096),
+    signed: random_signed_with_nonce(&mut OsRng, 0),
+  });

-  {
-    // This supports a variable share length, and variable amount of sent shares, yet share length
-    // and sent shares is expected to be constant among recipients
-    let share_len = usize::try_from((OsRng.next_u64() % 512) + 1).unwrap();
-    let amount_of_shares = usize::try_from((OsRng.next_u64() % 3) + 1).unwrap();
-    // Create a valid vec of shares
-    let mut shares = vec![];
-    // Create up to 150 participants
-    for _ in 0 ..= (OsRng.next_u64() % 150) {
-      // Give each sender multiple shares
-      let mut sender_shares = vec![];
-      for _ in 0 .. amount_of_shares {
-        let mut share = vec![0; share_len];
-        OsRng.fill_bytes(&mut share);
-        sender_shares.push(share);
-      }
-      shares.push(sender_shares);
-    }
+  test_read_write(&Transaction::DkgConfirmationNonces {
+    attempt: random_u32(&mut OsRng),
+    confirmation_nonces: {
+      let mut nonces = [0; 64];
+      OsRng.fill_bytes(&mut nonces);
+      nonces
+    },
+    signed: random_signed_with_nonce(&mut OsRng, 0),
+  });

-    test_read_write(&Transaction::DkgShares {
-      attempt: random_u32(&mut OsRng),
-      shares,
-      confirmation_nonces: {
-        let mut nonces = [0; 64];
-        OsRng.fill_bytes(&mut nonces);
-        nonces
-      },
-      signed: random_signed_with_nonce(&mut OsRng, 1),
-    });
-  }
-
-  for i in 0 .. 2 {
-    test_read_write(&Transaction::InvalidDkgShare {
-      attempt: random_u32(&mut OsRng),
-      accuser: frost::Participant::new(
-        u16::try_from(OsRng.next_u64() >> 48).unwrap().saturating_add(1),
-      )
-      .unwrap(),
-      faulty: frost::Participant::new(
-        u16::try_from(OsRng.next_u64() >> 48).unwrap().saturating_add(1),
-      )
-      .unwrap(),
-      blame: if i == 0 {
-        None
-      } else {
-        Some(random_vec(&mut OsRng, 500)).filter(|blame| !blame.is_empty())
-      },
-      signed: random_signed_with_nonce(&mut OsRng, 2),
-    });
-  }
-
-  test_read_write(&Transaction::DkgConfirmed {
+  test_read_write(&Transaction::DkgConfirmationShare {
     attempt: random_u32(&mut OsRng),
     confirmation_share: {
       let mut share = [0; 32];
      OsRng.fill_bytes(&mut share);
       share
     },
-    signed: random_signed_with_nonce(&mut OsRng, 2),
+    signed: random_signed_with_nonce(&mut OsRng, 1),
   });

   {
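`test_read_write` itself isn't part of this hunk; a plausible shape for such a round-trip helper, under the assumption the transactions expose a read/write serialization API (the trait below is a hypothetical stand-in, not the repository's):

use std::fmt::Debug;

// Hypothetical stand-in for the transactions' serialization API.
trait ReadWrite: Sized {
  fn write(&self) -> Vec<u8>;
  fn read(bytes: &[u8]) -> Option<Self>;
}

// Serialize, deserialize, and require the round-trip to be lossless.
fn test_read_write<T: ReadWrite + Debug + PartialEq>(value: &T) {
  let bytes = value.write();
  let read_back = T::read(&bytes).expect("couldn't read back a serialized value");
  assert_eq!(value, &read_back);
}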
@@ -29,7 +29,7 @@ async fn sync_test() {
   let mut keys = new_keys(&mut OsRng);
   let spec = new_spec(&mut OsRng, &keys);
   // Ensure this can have a node fail
-  assert!(spec.n(&[]) > spec.t());
+  assert!(spec.n() > spec.t());

   let mut tributaries = new_tributaries(&keys, &spec)
     .await
@@ -142,7 +142,7 @@ async fn sync_test() {
   // Because only `t` validators are used in a commit, take n - t nodes offline
   // leaving only `t` nodes. Which should force it to participate in the consensus
   // of next blocks.
-  let spares = usize::from(spec.n(&[]) - spec.t());
+  let spares = usize::from(spec.n() - spec.t());
   for thread in p2p_threads.iter().take(spares) {
     thread.abort();
   }
@@ -37,15 +37,14 @@ async fn tx_test() {
     usize::try_from(OsRng.next_u64() % u64::try_from(tributaries.len()).unwrap()).unwrap();
   let key = keys[sender].clone();

-  let attempt = 0;
-  let mut commitments = vec![0; 256];
-  OsRng.fill_bytes(&mut commitments);
-
-  // Create the TX with a null signature so we can get its sig hash
   let block_before_tx = tributaries[sender].1.tip().await;
-  let mut tx = Transaction::DkgCommitments {
-    attempt,
-    commitments: vec![commitments.clone()],
+  // Create the TX with a null signature so we can get its sig hash
+  let mut tx = Transaction::DkgParticipation {
+    participation: {
+      let mut participation = vec![0; 4096];
+      OsRng.fill_bytes(&mut participation);
+      participation
+    },
     signed: Transaction::empty_signed(),
   };
   tx.sign(&mut OsRng, spec.genesis(), &key);