One Round DKG (#589)

* Upstream GBP, divisor, circuit abstraction, and EC gadgets from FCMP++

* Initial eVRF implementation

Not quite done yet. It still needs to communicate the resulting points, along
with the proofs needed to extract them from the Pedersen Commitments, in order
to return those, and then be tested.

* Add the openings of the PCs to the eVRF as necessary

* Add implementation of secq256k1

* Make DKG Encryption a bit more flexible

No longer requires the use of an EncryptionKeyMessage, and allows pre-defined
keys for encryption.

* Make NUM_BITS an argument for the field macro

* Have the eVRF take a Zeroizing private key

* Initial eVRF-based DKG

* Add embedwards25519 curve

* Inline the eVRF into the DKG library

Due to how we're handling share encryption, we'd either need two circuits or to
dedicate this circuit to the DKG. The latter makes sense at this time.

* Add documentation to the eVRF-based DKG

* Add paragraph claiming robustness

* Update to the new eVRF proof

* Finish routing the eVRF functionality

Still needs errors and serialization, along with a few other TODOs.

* Add initial eVRF DKG test

* Improve eVRF DKG

Updates how we calculate verification shares, improves performance when
extracting multiple sets of keys, and extends the test for it.

* Start using a proper error for the eVRF DKG

* Resolve various TODOs

Supports recovering multiple key shares from the eVRF DKG.

Inlines two loops to save 2**16 iterations.

Adds support for creating a constant time representation of scalars < NUM_BITS.
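The constant-time representation can be sketched as follows. This is a hypothetical standalone sketch using a `u64` in place of an actual field element (the real implementation operates on curve scalars); the point is that the access pattern is independent of the secret value:

```rust
// Hypothetical sketch: decompose a scalar (a u64 stands in for a field
// element) into NUM_BITS bits without branching on secret data.
fn constant_time_bits(scalar: u64, num_bits: usize) -> Vec<u8> {
  let mut bits = Vec::with_capacity(num_bits);
  for i in 0 .. num_bits {
    // Shift-and-mask visits every bit position unconditionally, so the
    // sequence of operations doesn't depend on the scalar's value.
    bits.push(((scalar >> i) & 1) as u8);
  }
  bits
}
```

A real implementation would additionally avoid variable-time comparisons on the extracted bits (e.g. via a constant-time selection primitive).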

* Ban zero ECDH keys, document non-zero requirements

* Implement eVRF traits, all the way up to the DKG, for secp256k1/ed25519

* Add Ristretto eVRF trait impls

* Support participating multiple times in the eVRF DKG

* Only participate once per key, not once per key share

* Rewrite processor key-gen around the eVRF DKG

Still a WIP.

* Finish routing the new key gen in the processor

Doesn't touch the tests, coordinator, nor Substrate yet.
`cargo +nightly fmt && cargo +nightly-2024-07-01 clippy --all-features -p serai-processor`
does pass.

* Deduplicate and better document in processor key_gen

* Update serai-processor tests to the new key gen

* Correct amount of yx coefficients, get processor key gen test to pass

* Add embedded elliptic curve keys to Substrate

* Update processor key gen tests to the eVRF DKG

* Have set_keys take signature_participants, not removed_participants

Now no one is removed from the DKG. However, only `t` people publish the key.

Uses a BitVec for an efficient encoding of the participants.
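The BitVec encoding amounts to one bit per potential participant. A minimal sketch (the actual code uses Substrate's `BitVec` type; `encode_participants` here is a hypothetical helper):

```rust
// Hypothetical sketch: pack "which of n participants signed" into bytes,
// one bit per participant, least-significant bit first.
fn encode_participants(signed: &[bool]) -> Vec<u8> {
  let mut bytes = vec![0u8; (signed.len() + 7) / 8];
  for (i, &s) in signed.iter().enumerate() {
    if s {
      // Set bit i of the output.
      bytes[i / 8] |= 1 << (i % 8);
    }
  }
  bytes
}
```

For a 150-validator set this is 19 bytes, versus 300 bytes for a list of `u16` indices.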

* Update the coordinator binary for the new DKG

This does not yet update any tests.

* Add sensible Debug to key_gen::[Processor, Coordinator]Message

* Have the DKG explicitly declare how to interpolate its shares

Removes the hack for MuSig where we multiply keys by the inverse of their
lagrange interpolation factor.

* Replace Interpolation::None with Interpolation::Constant

Allows the MuSig DKG to keep the secret share as the original private key,
enabling deriving FROST nonces consistently regardless of the MuSig context.
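For reference, the interpolation factor the removed hack divided out is the Lagrange coefficient at x = 0. A toy sketch over a small prime field (not the actual curve's scalar field; `lagrange_at_zero` is a hypothetical helper):

```rust
// A small Mersenne prime standing in for the real scalar field's modulus.
const P: i128 = 2_147_483_647;

// Modular exponentiation by squaring.
fn mod_pow(mut b: i128, mut e: i128, p: i128) -> i128 {
  let mut acc = 1;
  b %= p;
  while e > 0 {
    if e & 1 == 1 { acc = acc * b % p; }
    b = b * b % p;
    e >>= 1;
  }
  acc
}

// Lagrange coefficient for participant i within `set`, evaluated at x = 0:
// prod_{j in set, j != i} j / (j - i), with division done via Fermat's
// little theorem (the modulus is prime).
fn lagrange_at_zero(i: i128, set: &[i128]) -> i128 {
  let mut num = 1;
  let mut den = 1;
  for &j in set {
    if j == i { continue }
    num = num * (j % P) % P;
    den = den * (j - i).rem_euclid(P) % P;
  }
  num * mod_pow(den, P - 2, P) % P
}
```

Summing each participant's share scaled by their coefficient reconstructs the secret (the polynomial's evaluation at zero).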

* Get coordinator tests to pass

* Update spec to the new DKG

* Get clippy to pass across the repo

* cargo machete

* Add an extra sleep to ensure expected ordering of `Participation`s

* Update orchestration

* Remove bad panic in coordinator

It expected ConfirmationShare to be n-of-n, not t-of-n.

* Improve documentation on functions

* Update TX size limit

We now no longer have to support the ridiculous case of having 49 DKG
participations within a 101-of-150 DKG. It does remain quite high due to
needing to _sign_ so many times. It may be optimal for parties with multiple
key shares to independently send their preprocesses/shares (despite the
overhead that'll cause with signatures and the transaction structure).

* Correct error in the Processor spec document

* Update a few comments in the validator-sets pallet

* Send/Recv Participation one at a time

Sending all, then attempting to receive all in an expected order, wasn't working
even with notable delays between sending messages. This points to the mempool
not working as expected...

* Correct ThresholdKeys serialization in modular-frost test

* Update existing TX size limit test for the new DKG parameters

* Increase time allowed for the DKG on the GH CI

* Correct construction of signature_participants in serai-client tests

Fault identified by akil.

* Further contextualize DkgConfirmer by ValidatorSet

Caught by a safety check that we don't reuse preprocesses across messages. That
raises the question of whether we were previously reusing preprocesses (reusing
keys)? Except that'd have caused a variety of signing failures (suggesting we
had some staggered timing avoiding it in practice, but yes, this was possible
in theory).

* Add necessary calls to set_embedded_elliptic_curve_key in coordinator set rotation tests

* Correct shimmed setting of a secq256k1 key

* cargo fmt

* Don't use `[0; 32]` for the embedded keys in the coordinator rotation test

The key_gen function expects the random values to already be decided.

* Big-endian secq256k1 scalars

Also restores the prior, safer, Encryption::register function.
This commit is contained in:
Luke Parker
2024-08-16 11:26:07 -07:00
parent 669b2fef72
commit e4e4245ee3
121 changed files with 10388 additions and 2480 deletions


@@ -1,18 +1,20 @@
use std::collections::HashMap;
use std::{
io,
collections::{HashSet, HashMap},
};
use zeroize::Zeroizing;
use rand_core::SeedableRng;
use rand_core::{RngCore, SeedableRng, OsRng};
use rand_chacha::ChaCha20Rng;
use blake2::{Digest, Blake2s256};
use transcript::{Transcript, RecommendedTranscript};
use ciphersuite::group::GroupEncoding;
use frost::{
curve::{Ciphersuite, Ristretto},
dkg::{
DkgError, Participant, ThresholdParams, ThresholdCore, ThresholdKeys, encryption::*, pedpop::*,
},
use ciphersuite::{
group::{Group, GroupEncoding},
Ciphersuite, Ristretto,
};
use dkg::{Participant, ThresholdCore, ThresholdKeys, evrf::*};
use log::info;
@@ -21,6 +23,48 @@ use messages::key_gen::*;
use crate::{Get, DbTxn, Db, create_db, networks::Network};
mod generators {
use core::any::{TypeId, Any};
use std::{
sync::{LazyLock, Mutex},
collections::HashMap,
};
use frost::dkg::evrf::*;
use serai_client::validator_sets::primitives::MAX_KEY_SHARES_PER_SET;
/// A cache of the generators used by the eVRF DKG.
///
/// This performs a lookup of the Ciphersuite to its generators. Since the Ciphersuite is a
/// generic, this takes advantage of `Any`. This static is isolated in a module to ensure
/// correctness can be evaluated solely by reviewing these few lines of code.
///
/// This is arguably over-engineered as of right now, as we only need generators for Ristretto
/// and N::Curve. By having this HashMap, we enable de-duplication of the Ristretto == N::Curve
/// case, and we automatically support the n-curve case (rather than hard-coding to the 2-curve
/// case).
static GENERATORS: LazyLock<Mutex<HashMap<TypeId, &'static (dyn Send + Sync + Any)>>> =
LazyLock::new(|| Mutex::new(HashMap::new()));
pub(crate) fn generators<C: EvrfCurve>() -> &'static EvrfGenerators<C> {
GENERATORS
.lock()
.unwrap()
.entry(TypeId::of::<C>())
.or_insert_with(|| {
// If we haven't prior needed generators for this Ciphersuite, generate new ones
Box::leak(Box::new(EvrfGenerators::<C>::new(
((MAX_KEY_SHARES_PER_SET * 2 / 3) + 1).try_into().unwrap(),
MAX_KEY_SHARES_PER_SET.try_into().unwrap(),
)))
})
.downcast_ref()
.unwrap()
}
}
use generators::generators;
#[derive(Debug)]
pub struct KeyConfirmed<C: Ciphersuite> {
pub substrate_keys: Vec<ThresholdKeys<Ristretto>>,
@@ -29,15 +73,18 @@ pub struct KeyConfirmed<C: Ciphersuite> {
create_db!(
KeyGenDb {
ParamsDb: (session: &Session, attempt: u32) -> (ThresholdParams, u16),
// Not scoped to the set since that'd have latter attempts overwrite former
// A former attempt may become the finalized attempt, even if it doesn't in a timely manner
// Overwriting its commitments would be accordingly poor
CommitmentsDb: (key: &KeyGenId) -> HashMap<Participant, Vec<u8>>,
GeneratedKeysDb: (session: &Session, substrate_key: &[u8; 32], network_key: &[u8]) -> Vec<u8>,
// These do assume a key is only used once across sets, which holds true so long as a single
// participant is honest in their execution of the protocol
KeysDb: (network_key: &[u8]) -> Vec<u8>,
ParamsDb: (session: &Session) -> (u16, Vec<[u8; 32]>, Vec<Vec<u8>>),
ParticipationDb: (session: &Session) -> (
HashMap<Participant, Vec<u8>>,
HashMap<Participant, Vec<u8>>,
),
// GeneratedKeysDb, KeysDb use `()` for their value as we manually serialize their values
// TODO: Don't do that
GeneratedKeysDb: (session: &Session) -> (),
// These do assume a key is only used once across sets, which holds true if the threshold is
// honest
// TODO: Remove this assumption
KeysDb: (network_key: &[u8]) -> (),
SessionDb: (network_key: &[u8]) -> Session,
NetworkKeyDb: (session: Session) -> Vec<u8>,
}
@@ -65,8 +112,8 @@ impl GeneratedKeysDb {
fn save_keys<N: Network>(
txn: &mut impl DbTxn,
id: &KeyGenId,
substrate_keys: &[ThresholdCore<Ristretto>],
session: &Session,
substrate_keys: &[ThresholdKeys<Ristretto>],
network_keys: &[ThresholdKeys<N::Curve>],
) {
let mut keys = Zeroizing::new(vec![]);
@@ -74,14 +121,7 @@ impl GeneratedKeysDb {
keys.extend(substrate_keys.serialize().as_slice());
keys.extend(network_keys.serialize().as_slice());
}
txn.put(
Self::key(
&id.session,
&substrate_keys[0].group_key().to_bytes(),
network_keys[0].group_key().to_bytes().as_ref(),
),
keys,
);
txn.put(Self::key(session), keys);
}
}
@@ -91,11 +131,8 @@ impl KeysDb {
session: Session,
key_pair: &KeyPair,
) -> (Vec<ThresholdKeys<Ristretto>>, Vec<ThresholdKeys<N::Curve>>) {
let (keys_vec, keys) = GeneratedKeysDb::read_keys::<N>(
txn,
&GeneratedKeysDb::key(&session, &key_pair.0 .0, key_pair.1.as_ref()),
)
.unwrap();
let (keys_vec, keys) =
GeneratedKeysDb::read_keys::<N>(txn, &GeneratedKeysDb::key(&session)).unwrap();
assert_eq!(key_pair.0 .0, keys.0[0].group_key().to_bytes());
assert_eq!(
{
@@ -130,32 +167,105 @@ impl KeysDb {
}
}
type SecretShareMachines<N> =
Vec<(SecretShareMachine<Ristretto>, SecretShareMachine<<N as Network>::Curve>)>;
type KeyMachines<N> = Vec<(KeyMachine<Ristretto>, KeyMachine<<N as Network>::Curve>)>;
/*
On the Serai blockchain, users specify their public keys on the embedded curves. Substrate does
not have the libraries for the embedded curves and is unable to evaluate if the keys are valid
or not.
We could add the libraries for the embedded curves to the blockchain, yet this would be a
non-trivial scope for what's effectively an embedded context. It'd also permanently bind our
consensus to these arbitrary curves. We would have the benefit of being able to also require PoKs
for the keys, ensuring no one uses someone else's key (creating oddities there). Since someone
who uses someone else's key can't actually participate, all it does in effect is give more key
shares to the holder of the private key, and make us unable to rely on eVRF keys as a secure way
to index validators (hence the usage of `Participant` throughout the messages here).
We could remove invalid keys from the DKG, yet this would create a view of the DKG only the
processor (which does have the embedded curves) has. We'd need to reconcile it with the view of
the DKG which does include all keys (even the invalid keys).
The easiest solution is to keep the views consistent by replacing invalid keys with valid keys
(which no one has the private key for). This keeps the view consistent. This does prevent those
who posted invalid keys from participating, and receiving their keys, which is the understood and
declared effect of them posting invalid keys. Since at least `t` people must honestly participate
for the DKG to complete, and since their honest participation means they had valid keys, we do
ensure at least `t` people participated and the DKG result can be reconstructed.
We do lose fault tolerance, yet only by losing those faulty. Accordingly, this is accepted.
Returns the coerced keys and faulty participants.
*/
fn coerce_keys<C: EvrfCurve>(
key_bytes: &[impl AsRef<[u8]>],
) -> (Vec<<C::EmbeddedCurve as Ciphersuite>::G>, Vec<Participant>) {
fn evrf_key<C: EvrfCurve>(key: &[u8]) -> Option<<C::EmbeddedCurve as Ciphersuite>::G> {
let mut repr = <<C::EmbeddedCurve as Ciphersuite>::G as GroupEncoding>::Repr::default();
if repr.as_ref().len() != key.len() {
None?;
}
repr.as_mut().copy_from_slice(key);
let point = Option::<<C::EmbeddedCurve as Ciphersuite>::G>::from(<_>::from_bytes(&repr))?;
if bool::from(point.is_identity()) {
None?;
}
Some(point)
}
let mut keys = Vec::with_capacity(key_bytes.len());
let mut faulty = vec![];
for (i, key) in key_bytes.iter().enumerate() {
let i = Participant::new(
1 + u16::try_from(i).expect("performing a key gen with more than u16::MAX participants"),
)
.unwrap();
keys.push(match evrf_key::<C>(key.as_ref()) {
Some(key) => key,
None => {
// Mark this participant faulty
faulty.push(i);
// Generate a random key
let mut rng = ChaCha20Rng::from_seed(Blake2s256::digest(key).into());
loop {
let mut repr = <<C::EmbeddedCurve as Ciphersuite>::G as GroupEncoding>::Repr::default();
rng.fill_bytes(repr.as_mut());
if let Some(key) =
Option::<<C::EmbeddedCurve as Ciphersuite>::G>::from(<_>::from_bytes(&repr))
{
break key;
}
}
}
});
}
(keys, faulty)
}
#[derive(Debug)]
pub struct KeyGen<N: Network, D: Db> {
db: D,
entropy: Zeroizing<[u8; 32]>,
active_commit: HashMap<Session, (SecretShareMachines<N>, Vec<Vec<u8>>)>,
#[allow(clippy::type_complexity)]
active_share: HashMap<Session, (KeyMachines<N>, Vec<HashMap<Participant, Vec<u8>>>)>,
substrate_evrf_private_key:
Zeroizing<<<Ristretto as EvrfCurve>::EmbeddedCurve as Ciphersuite>::F>,
network_evrf_private_key: Zeroizing<<<N::Curve as EvrfCurve>::EmbeddedCurve as Ciphersuite>::F>,
}
impl<N: Network, D: Db> KeyGen<N, D> {
#[allow(clippy::new_ret_no_self)]
pub fn new(db: D, entropy: Zeroizing<[u8; 32]>) -> KeyGen<N, D> {
KeyGen { db, entropy, active_commit: HashMap::new(), active_share: HashMap::new() }
pub fn new(
db: D,
substrate_evrf_private_key: Zeroizing<
<<Ristretto as EvrfCurve>::EmbeddedCurve as Ciphersuite>::F,
>,
network_evrf_private_key: Zeroizing<<<N::Curve as EvrfCurve>::EmbeddedCurve as Ciphersuite>::F>,
) -> KeyGen<N, D> {
KeyGen { db, substrate_evrf_private_key, network_evrf_private_key }
}
pub fn in_set(&self, session: &Session) -> bool {
// We determine if we're in set using if we have the parameters for a session's key generation
// The usage of 0 for the attempt is valid so long as we aren't malicious and accordingly
// aren't fatally slashed
// TODO: Revisit once we do DKG removals for being offline
ParamsDb::get(&self.db, session, 0).is_some()
// We only have these if we were told to generate a key for this session
ParamsDb::get(&self.db, session).is_some()
}
#[allow(clippy::type_complexity)]
@@ -179,406 +289,351 @@ impl<N: Network, D: Db> KeyGen<N, D> {
&mut self,
txn: &mut D::Transaction<'_>,
msg: CoordinatorMessage,
) -> ProcessorMessage {
const SUBSTRATE_KEY_CONTEXT: &str = "substrate";
const NETWORK_KEY_CONTEXT: &str = "network";
let context = |id: &KeyGenId, key| {
) -> Vec<ProcessorMessage> {
const SUBSTRATE_KEY_CONTEXT: &[u8] = b"substrate";
const NETWORK_KEY_CONTEXT: &[u8] = b"network";
fn context<N: Network>(session: Session, key_context: &[u8]) -> [u8; 32] {
// TODO2: Also embed the chain ID/genesis block
format!(
"Serai Key Gen. Session: {:?}, Network: {:?}, Attempt: {}, Key: {}",
id.session,
N::NETWORK,
id.attempt,
key,
)
};
let rng = |label, id: KeyGenId| {
let mut transcript = RecommendedTranscript::new(label);
transcript.append_message(b"entropy", &self.entropy);
transcript.append_message(b"context", context(&id, "rng"));
ChaCha20Rng::from_seed(transcript.rng_seed(b"rng"))
};
let coefficients_rng = |id| rng(b"Key Gen Coefficients", id);
let secret_shares_rng = |id| rng(b"Key Gen Secret Shares", id);
let share_rng = |id| rng(b"Key Gen Share", id);
let key_gen_machines = |id, params: ThresholdParams, shares| {
let mut rng = coefficients_rng(id);
let mut machines = vec![];
let mut commitments = vec![];
for s in 0 .. shares {
let params = ThresholdParams::new(
params.t(),
params.n(),
Participant::new(u16::from(params.i()) + s).unwrap(),
)
.unwrap();
let substrate = KeyGenMachine::new(params, context(&id, SUBSTRATE_KEY_CONTEXT))
.generate_coefficients(&mut rng);
let network = KeyGenMachine::new(params, context(&id, NETWORK_KEY_CONTEXT))
.generate_coefficients(&mut rng);
machines.push((substrate.0, network.0));
let mut serialized = vec![];
substrate.1.write(&mut serialized).unwrap();
network.1.write(&mut serialized).unwrap();
commitments.push(serialized);
}
(machines, commitments)
};
let secret_share_machines = |id,
params: ThresholdParams,
machines: SecretShareMachines<N>,
commitments: HashMap<Participant, Vec<u8>>|
-> Result<_, ProcessorMessage> {
let mut rng = secret_shares_rng(id);
#[allow(clippy::type_complexity)]
fn handle_machine<C: Ciphersuite>(
rng: &mut ChaCha20Rng,
id: KeyGenId,
machine: SecretShareMachine<C>,
commitments: HashMap<Participant, EncryptionKeyMessage<C, Commitments<C>>>,
) -> Result<
(KeyMachine<C>, HashMap<Participant, EncryptedMessage<C, SecretShare<C::F>>>),
ProcessorMessage,
> {
match machine.generate_secret_shares(rng, commitments) {
Ok(res) => Ok(res),
Err(e) => match e {
DkgError::ZeroParameter(_, _) |
DkgError::InvalidThreshold(_, _) |
DkgError::InvalidParticipant(_, _) |
DkgError::InvalidSigningSet |
DkgError::InvalidShare { .. } => unreachable!("{e:?}"),
DkgError::InvalidParticipantQuantity(_, _) |
DkgError::DuplicatedParticipant(_) |
DkgError::MissingParticipant(_) => {
panic!("coordinator sent invalid DKG commitments: {e:?}")
}
DkgError::InvalidCommitments(i) => {
Err(ProcessorMessage::InvalidCommitments { id, faulty: i })?
}
},
}
}
let mut substrate_commitments = HashMap::new();
let mut network_commitments = HashMap::new();
for i in 1 ..= params.n() {
let i = Participant::new(i).unwrap();
let mut commitments = commitments[&i].as_slice();
substrate_commitments.insert(
i,
EncryptionKeyMessage::<Ristretto, Commitments<Ristretto>>::read(&mut commitments, params)
.map_err(|_| ProcessorMessage::InvalidCommitments { id, faulty: i })?,
);
network_commitments.insert(
i,
EncryptionKeyMessage::<N::Curve, Commitments<N::Curve>>::read(&mut commitments, params)
.map_err(|_| ProcessorMessage::InvalidCommitments { id, faulty: i })?,
);
if !commitments.is_empty() {
// Malicious Participant included extra bytes in their commitments
// (a potential DoS attack)
Err(ProcessorMessage::InvalidCommitments { id, faulty: i })?;
}
}
let mut key_machines = vec![];
let mut shares = vec![];
for (m, (substrate_machine, network_machine)) in machines.into_iter().enumerate() {
let actual_i = Participant::new(u16::from(params.i()) + u16::try_from(m).unwrap()).unwrap();
let mut substrate_commitments = substrate_commitments.clone();
substrate_commitments.remove(&actual_i);
let (substrate_machine, mut substrate_shares) =
handle_machine::<Ristretto>(&mut rng, id, substrate_machine, substrate_commitments)?;
let mut network_commitments = network_commitments.clone();
network_commitments.remove(&actual_i);
let (network_machine, network_shares) =
handle_machine(&mut rng, id, network_machine, network_commitments.clone())?;
key_machines.push((substrate_machine, network_machine));
let mut these_shares: HashMap<_, _> =
substrate_shares.drain().map(|(i, share)| (i, share.serialize())).collect();
for (i, share) in &mut these_shares {
share.extend(network_shares[i].serialize());
}
shares.push(these_shares);
}
Ok((key_machines, shares))
};
let mut transcript = RecommendedTranscript::new(b"Serai eVRF Key Gen");
transcript.append_message(b"network", N::ID);
transcript.append_message(b"session", session.0.to_le_bytes());
transcript.append_message(b"key", key_context);
(&(&transcript.challenge(b"context"))[.. 32]).try_into().unwrap()
}
match msg {
CoordinatorMessage::GenerateKey { id, params, shares } => {
info!("Generating new key. ID: {id:?} Params: {params:?} Shares: {shares}");
CoordinatorMessage::GenerateKey { session, threshold, evrf_public_keys } => {
info!("Generating new key. Session: {session:?}");
// Remove old attempts
if self.active_commit.remove(&id.session).is_none() &&
self.active_share.remove(&id.session).is_none()
// Unzip the vector of eVRF keys
let substrate_evrf_public_keys =
evrf_public_keys.iter().map(|(key, _)| *key).collect::<Vec<_>>();
let network_evrf_public_keys =
evrf_public_keys.into_iter().map(|(_, key)| key).collect::<Vec<_>>();
let mut participation = Vec::with_capacity(2048);
let mut faulty = HashSet::new();
// Participate for both Substrate and the network
fn participate<C: EvrfCurve>(
context: [u8; 32],
threshold: u16,
evrf_public_keys: &[impl AsRef<[u8]>],
evrf_private_key: &Zeroizing<<C::EmbeddedCurve as Ciphersuite>::F>,
faulty: &mut HashSet<Participant>,
output: &mut impl io::Write,
) {
let (coerced_keys, faulty_is) = coerce_keys::<C>(evrf_public_keys);
for faulty_i in faulty_is {
faulty.insert(faulty_i);
}
let participation = EvrfDkg::<C>::participate(
&mut OsRng,
generators(),
context,
threshold,
&coerced_keys,
evrf_private_key,
);
participation.unwrap().write(output).unwrap();
}
participate::<Ristretto>(
context::<N>(session, SUBSTRATE_KEY_CONTEXT),
threshold,
&substrate_evrf_public_keys,
&self.substrate_evrf_private_key,
&mut faulty,
&mut participation,
);
participate::<N::Curve>(
context::<N>(session, NETWORK_KEY_CONTEXT),
threshold,
&network_evrf_public_keys,
&self.network_evrf_private_key,
&mut faulty,
&mut participation,
);
// Save the params
ParamsDb::set(
txn,
&session,
&(threshold, substrate_evrf_public_keys, network_evrf_public_keys),
);
// Send back our Participation and all faulty parties
let mut faulty = faulty.into_iter().collect::<Vec<_>>();
faulty.sort();
let mut res = Vec::with_capacity(faulty.len() + 1);
for faulty in faulty {
res.push(ProcessorMessage::Blame { session, participant: faulty });
}
res.push(ProcessorMessage::Participation { session, participation });
res
}
CoordinatorMessage::Participation { session, participant, participation } => {
info!("received participation from {:?} for {:?}", participant, session);
let (threshold, substrate_evrf_public_keys, network_evrf_public_keys) =
ParamsDb::get(txn, &session).unwrap();
let n = substrate_evrf_public_keys
.len()
.try_into()
.expect("performing a key gen with more than u16::MAX participants");
// Read these `Participation`s
// If they fail basic sanity checks, fail fast
let (substrate_participation, network_participation) = {
let network_participation_start_pos = {
let mut participation = participation.as_slice();
let start_len = participation.len();
let blame = vec![ProcessorMessage::Blame { session, participant }];
let Ok(substrate_participation) =
Participation::<Ristretto>::read(&mut participation, n)
else {
return blame;
};
let len_at_network_participation_start_pos = participation.len();
let Ok(network_participation) = Participation::<N::Curve>::read(&mut participation, n)
else {
return blame;
};
// If they added random noise after their participations, they're faulty
// This prevents DoS by causing a slash upon such spam
if !participation.is_empty() {
return blame;
}
// If we've already generated these keys, we don't actually need to save these
// participations and continue. We solely have to verify them, as to identify malicious
// participants and prevent DoSs, before returning
if txn.get(GeneratedKeysDb::key(&session)).is_some() {
info!("already finished generating a key for {:?}", session);
match EvrfDkg::<Ristretto>::verify(
&mut OsRng,
generators(),
context::<N>(session, SUBSTRATE_KEY_CONTEXT),
threshold,
// Ignores the list of participants who were faulty, as they were prior blamed
&coerce_keys::<Ristretto>(&substrate_evrf_public_keys).0,
&HashMap::from([(participant, substrate_participation)]),
)
.unwrap()
{
VerifyResult::Valid(_) | VerifyResult::NotEnoughParticipants => {}
VerifyResult::Invalid(faulty) => {
assert_eq!(faulty, vec![participant]);
return vec![ProcessorMessage::Blame { session, participant }];
}
}
match EvrfDkg::<N::Curve>::verify(
&mut OsRng,
generators(),
context::<N>(session, NETWORK_KEY_CONTEXT),
threshold,
// Ignores the list of participants who were faulty, as they were prior blamed
&coerce_keys::<N::Curve>(&network_evrf_public_keys).0,
&HashMap::from([(participant, network_participation)]),
)
.unwrap()
{
VerifyResult::Valid(_) | VerifyResult::NotEnoughParticipants => return vec![],
VerifyResult::Invalid(faulty) => {
assert_eq!(faulty, vec![participant]);
return vec![ProcessorMessage::Blame { session, participant }];
}
}
}
// Return the position the network participation starts at
start_len - len_at_network_participation_start_pos
};
// Instead of re-serializing the `Participation`s we read, we just use the relevant
// sections of the existing byte buffer
(
participation[.. network_participation_start_pos].to_vec(),
participation[network_participation_start_pos ..].to_vec(),
)
};
// Since these are valid `Participation`s, save them
let (mut substrate_participations, mut network_participations) =
ParticipationDb::get(txn, &session)
.unwrap_or((HashMap::with_capacity(1), HashMap::with_capacity(1)));
assert!(
substrate_participations.insert(participant, substrate_participation).is_none() &&
network_participations.insert(participant, network_participation).is_none(),
"received participation for someone multiple times"
);
ParticipationDb::set(
txn,
&session,
&(substrate_participations.clone(), network_participations.clone()),
);
// This block is taken from the eVRF DKG itself to evaluate the amount participating
{
// If we haven't handled this session before, save the params
ParamsDb::set(txn, &id.session, id.attempt, &(params, shares));
}
let mut participating_weight = 0;
// This uses the Substrate maps as the maps are kept in synchrony
let mut evrf_public_keys_mut = substrate_evrf_public_keys.clone();
for i in substrate_participations.keys() {
let evrf_public_key = substrate_evrf_public_keys[usize::from(u16::from(*i)) - 1];
let (machines, commitments) = key_gen_machines(id, params, shares);
self.active_commit.insert(id.session, (machines, commitments.clone()));
// Remove this key from the Vec to prevent double-counting
/*
Double-counting would be a risk if multiple participants shared an eVRF public key
and participated. This code does still allow such participants (in order to let
participants be weighted), and any one of them participating will count as all
participating. This is fine as any one such participant will be able to decrypt
the shares for themselves and all other participants, so this is still a key
generated by an amount of participants who could simply reconstruct the key.
*/
let start_len = evrf_public_keys_mut.len();
evrf_public_keys_mut.retain(|key| *key != evrf_public_key);
let end_len = evrf_public_keys_mut.len();
let count = start_len - end_len;
ProcessorMessage::Commitments { id, commitments }
}
CoordinatorMessage::Commitments { id, mut commitments } => {
info!("Received commitments for {:?}", id);
if self.active_share.contains_key(&id.session) {
// We should've been told of a new attempt before receiving commitments again
// The coordinator is either missing messages or repeating itself
// Either way, it's faulty
panic!("commitments when already handled commitments");
}
let (params, share_quantity) = ParamsDb::get(txn, &id.session, id.attempt).unwrap();
// Unwrap the machines, rebuilding them if we didn't have them in our cache
// We won't if the processor rebooted
// This *may* be inconsistent if we receive a KeyGen for attempt x, then commitments for
// attempt y
// The coordinator is trusted to be proper in this regard
let (prior, our_commitments) = self
.active_commit
.remove(&id.session)
.unwrap_or_else(|| key_gen_machines(id, params, share_quantity));
for (i, our_commitments) in our_commitments.into_iter().enumerate() {
assert!(commitments
.insert(
Participant::new(u16::from(params.i()) + u16::try_from(i).unwrap()).unwrap(),
our_commitments,
)
.is_none());
}
CommitmentsDb::set(txn, &id, &commitments);
match secret_share_machines(id, params, prior, commitments) {
Ok((machines, shares)) => {
self.active_share.insert(id.session, (machines, shares.clone()));
ProcessorMessage::Shares { id, shares }
participating_weight += count;
}
Err(e) => e,
}
}
CoordinatorMessage::Shares { id, shares } => {
info!("Received shares for {:?}", id);
let (params, share_quantity) = ParamsDb::get(txn, &id.session, id.attempt).unwrap();
// Same commentary on inconsistency as above exists
let (machines, our_shares) = self.active_share.remove(&id.session).unwrap_or_else(|| {
let prior = key_gen_machines(id, params, share_quantity).0;
let (machines, shares) =
secret_share_machines(id, params, prior, CommitmentsDb::get(txn, &id).unwrap())
.expect("got Shares for a key gen which faulted");
(machines, shares)
});
let mut rng = share_rng(id);
fn handle_machine<C: Ciphersuite>(
rng: &mut ChaCha20Rng,
id: KeyGenId,
// These are the params of our first share, not this machine's shares
params: ThresholdParams,
m: usize,
machine: KeyMachine<C>,
shares_ref: &mut HashMap<Participant, &[u8]>,
) -> Result<ThresholdCore<C>, ProcessorMessage> {
let params = ThresholdParams::new(
params.t(),
params.n(),
Participant::new(u16::from(params.i()) + u16::try_from(m).unwrap()).unwrap(),
)
.unwrap();
// Parse the shares
let mut shares = HashMap::new();
for i in 1 ..= params.n() {
let i = Participant::new(i).unwrap();
let Some(share) = shares_ref.get_mut(&i) else { continue };
shares.insert(
i,
EncryptedMessage::<C, SecretShare<C::F>>::read(share, params).map_err(|_| {
ProcessorMessage::InvalidShare { id, accuser: params.i(), faulty: i, blame: None }
})?,
);
if participating_weight < usize::from(threshold) {
return vec![];
}
Ok(
(match machine.calculate_share(rng, shares) {
Ok(res) => res,
Err(e) => match e {
DkgError::ZeroParameter(_, _) |
DkgError::InvalidThreshold(_, _) |
DkgError::InvalidParticipant(_, _) |
DkgError::InvalidSigningSet |
DkgError::InvalidCommitments(_) => unreachable!("{e:?}"),
DkgError::InvalidParticipantQuantity(_, _) |
DkgError::DuplicatedParticipant(_) |
DkgError::MissingParticipant(_) => {
panic!("coordinator sent invalid DKG shares: {e:?}")
}
DkgError::InvalidShare { participant, blame } => {
Err(ProcessorMessage::InvalidShare {
id,
accuser: params.i(),
faulty: participant,
blame: Some(blame.map(|blame| blame.serialize())).flatten(),
})?
}
},
})
.complete(),
)
}
// If we now have the threshold participating, verify their `Participation`s
fn verify_dkg<N: Network, C: EvrfCurve>(
txn: &mut impl DbTxn,
session: Session,
true_if_substrate_false_if_network: bool,
threshold: u16,
evrf_public_keys: &[impl AsRef<[u8]>],
substrate_participations: &mut HashMap<Participant, Vec<u8>>,
network_participations: &mut HashMap<Participant, Vec<u8>>,
) -> Result<EvrfDkg<C>, Vec<ProcessorMessage>> {
// Parse the `Participation`s
let participations = (if true_if_substrate_false_if_network {
&*substrate_participations
} else {
&*network_participations
})
.iter()
.map(|(key, participation)| {
(
*key,
Participation::read(
&mut participation.as_slice(),
evrf_public_keys.len().try_into().unwrap(),
)
.expect("prior read participation was invalid"),
)
})
.collect();
// Actually call verify on the DKG
match EvrfDkg::<C>::verify(
&mut OsRng,
generators(),
context::<N>(
session,
if true_if_substrate_false_if_network {
SUBSTRATE_KEY_CONTEXT
} else {
NETWORK_KEY_CONTEXT
},
),
threshold,
// Ignores the list of participants who were faulty, as they were prior blamed
&coerce_keys::<C>(evrf_public_keys).0,
&participations,
)
.unwrap()
{
// If the DKG was valid, return it
VerifyResult::Valid(dkg) => Ok(dkg),
// This DKG had faulty participants, so create blame messages for them
VerifyResult::Invalid(faulty) => {
let mut blames = vec![];
for participant in faulty {
// Remove from both maps for simplicity's sake
// There's no point in having one DKG complete yet not the other
assert!(substrate_participations.remove(&participant).is_some());
assert!(network_participations.remove(&participant).is_some());
blames.push(ProcessorMessage::Blame { session, participant });
}
// Since we removed `Participation`s, write the updated versions to the database
ParticipationDb::set(
txn,
&session,
&(substrate_participations.clone(), network_participations.clone()),
);
Err(blames)?
}
VerifyResult::NotEnoughParticipants => {
// This is the first DKG, and we checked we were at the threshold OR
// This is the second DKG, as the first had no invalid participants, so we're still
// at the threshold
panic!("not enough participants despite checking we were at the threshold")
}
}
}
let substrate_dkg = match verify_dkg::<N, Ristretto>(
txn,
session,
true,
threshold,
&substrate_evrf_public_keys,
&mut substrate_participations,
&mut network_participations,
) {
Ok(dkg) => dkg,
// If we had any blames, immediately return them as necessary for the safety of
// `verify_dkg` (it assumes we don't call it again upon prior errors)
Err(blames) => return blames,
};
let network_dkg = match verify_dkg::<N, N::Curve>(
txn,
session,
false,
threshold,
&network_evrf_public_keys,
&mut substrate_participations,
&mut network_participations,
) {
Ok(dkg) => dkg,
Err(blames) => return blames,
};
// Get our keys from each DKG
// TODO: Some of these keys may be decrypted by us, yet not actually meant for us, if
// another validator set our eVRF public key as their eVRF public key. We either need to
// ensure the coordinator tracks the amount of shares we're supposed to have by the eVRF
// public keys OR explicitly reduce to the keys we're supposed to have based on our `i` index.
let substrate_keys = substrate_dkg.keys(&self.substrate_evrf_private_key);
let mut network_keys = network_dkg.keys(&self.network_evrf_private_key);
// Tweak the keys for the network
for network_keys in &mut network_keys {
N::tweak_keys(network_keys);
}
GeneratedKeysDb::save_keys::<N>(txn, &session, &substrate_keys, &network_keys);
// Since no one we verified was invalid, and we had the threshold, yield the new keys
vec![ProcessorMessage::GeneratedKeyPair {
session,
substrate_key: substrate_keys[0].group_key().to_bytes(),
// TODO: This can be made more efficient since tweaked keys may be a subset of keys
network_key: network_keys[0].group_key().to_bytes().as_ref().to_vec(),
}]
}
CoordinatorMessage::VerifyBlame { id, accuser, accused, share, blame } => {
let params = ParamsDb::get(txn, &id.session, id.attempt).unwrap().0;
let mut share_ref = share.as_slice();
let Ok(substrate_share) = EncryptedMessage::<
Ristretto,
SecretShare<<Ristretto as Ciphersuite>::F>,
>::read(&mut share_ref, params) else {
return ProcessorMessage::Blame { id, participant: accused };
};
let Ok(network_share) = EncryptedMessage::<
N::Curve,
SecretShare<<N::Curve as Ciphersuite>::F>,
>::read(&mut share_ref, params) else {
return ProcessorMessage::Blame { id, participant: accused };
};
if !share_ref.is_empty() {
return ProcessorMessage::Blame { id, participant: accused };
}
let mut substrate_commitment_msgs = HashMap::new();
let mut network_commitment_msgs = HashMap::new();
let commitments = CommitmentsDb::get(txn, &id).unwrap();
for (i, commitments) in commitments {
let mut commitments = commitments.as_slice();
substrate_commitment_msgs
.insert(i, EncryptionKeyMessage::<_, _>::read(&mut commitments, params).unwrap());
network_commitment_msgs
.insert(i, EncryptionKeyMessage::<_, _>::read(&mut commitments, params).unwrap());
}
// There is a mild DoS here where someone with a valid blame bloats it to the maximum size.
// Given the ambiguity, and the limited potential to DoS (this being called means *someone*
// is being fatally slashed), there's no need to ensure the blame is minimal.
let substrate_blame =
blame.clone().and_then(|blame| EncryptionKeyProof::read(&mut blame.as_slice()).ok());
let network_blame =
blame.clone().and_then(|blame| EncryptionKeyProof::read(&mut blame.as_slice()).ok());
let substrate_blame = AdditionalBlameMachine::new(
&mut rand_core::OsRng,
context(&id, SUBSTRATE_KEY_CONTEXT),
params.n(),
substrate_commitment_msgs,
)
.unwrap()
.blame(accuser, accused, substrate_share, substrate_blame);
let network_blame = AdditionalBlameMachine::new(
&mut rand_core::OsRng,
context(&id, NETWORK_KEY_CONTEXT),
params.n(),
network_commitment_msgs,
)
.unwrap()
.blame(accuser, accused, network_share, network_blame);
// If the accused was blamed for either, mark them as at fault
if (substrate_blame == accused) || (network_blame == accused) {
return ProcessorMessage::Blame { id, participant: accused };
}
ProcessorMessage::Blame { id, participant: accuser }
}
}
}
}