Mirror of https://github.com/serai-dex/serai.git, synced 2025-12-09 04:39:24 +00:00
One Round DKG (#589)
* Upstream GBP, divisor, circuit abstraction, and EC gadgets from FCMP++
* Initial eVRF implementation. Not quite done yet: it needs to communicate the resulting points and proofs to extract them from the Pedersen Commitments in order to return those, and then be tested.
* Add the openings of the PCs to the eVRF as necessary
* Add implementation of secq256k1
* Make DKG Encryption a bit more flexible. No longer requires the use of an EncryptionKeyMessage, and allows pre-defined keys for encryption.
* Make NUM_BITS an argument for the field macro
* Have the eVRF take a Zeroizing private key
* Initial eVRF-based DKG
* Add embedwards25519 curve
* Inline the eVRF into the DKG library. Due to how we're handling share encryption, we'd either need two circuits or to dedicate this circuit to the DKG. The latter makes sense at this time.
* Add documentation to the eVRF-based DKG
* Add paragraph claiming robustness
* Update to the new eVRF proof
* Finish routing the eVRF functionality. Still needs errors and serialization, along with a few other TODOs.
* Add initial eVRF DKG test
* Improve eVRF DKG. Updates how we calculate verification shares, improves performance when extracting multiple sets of keys, and adds more to the test for it.
* Start using a proper error for the eVRF DKG
* Resolve various TODOs. Supports recovering multiple key shares from the eVRF DKG. Inlines two loops to save 2**16 iterations. Adds support for creating a constant-time representation of scalars < NUM_BITS.
* Ban zero ECDH keys, document non-zero requirements
* Implement eVRF traits, all the way up to the DKG, for secp256k1/ed25519
* Add Ristretto eVRF trait impls
* Support participating multiple times in the eVRF DKG
* Only participate once per key, not once per key share
* Rewrite processor key-gen around the eVRF DKG. Still a WIP.
* Finish routing the new key gen in the processor. Doesn't touch the tests, coordinator, nor Substrate yet. `cargo +nightly fmt && cargo +nightly-2024-07-01 clippy --all-features -p serai-processor` does pass.
* Deduplicate and better document in processor key_gen
* Update serai-processor tests to the new key gen
* Correct amount of yx coefficients, get processor key gen test to pass
* Add embedded elliptic curve keys to Substrate
* Update processor key gen tests to the eVRF DKG
* Have set_keys take signature_participants, not removed_participants. Now no one is removed from the DKG; only `t` people publish the key, however. Uses a BitVec for an efficient encoding of the participants.
* Update the coordinator binary for the new DKG. This does not yet update any tests.
* Add sensible Debug to key_gen::[Processor, Coordinator]Message
* Have the DKG explicitly declare how to interpolate its shares. Removes the hack for MuSig where we multiply keys by the inverse of their Lagrange interpolation factor.
* Replace Interpolation::None with Interpolation::Constant. Allows the MuSig DKG to keep the secret share as the original private key, enabling deriving FROST nonces consistently regardless of the MuSig context.
* Get coordinator tests to pass
* Update spec to the new DKG
* Get clippy to pass across the repo
* cargo machete
* Add an extra sleep to ensure expected ordering of `Participation`s
* Update orchestration
* Remove bad panic in coordinator. It expected ConfirmationShare to be n-of-n, not t-of-n.
* Improve documentation on functions
* Update TX size limit. We no longer have to support the ridiculous case of having 49 DKG participations within a 101-of-150 DKG. It does remain quite high due to needing to _sign_ so many times. It may be optimal for parties with multiple key shares to independently send their preprocesses/shares (despite the overhead that'll cause with signatures and the transaction structure).
* Correct error in the Processor spec document
* Update a few comments in the validator-sets pallet
* Send/Recv Participation one at a time. Sending all, then attempting to receive all in an expected order, wasn't working even with notable delays between sending messages. This points to the mempool not working as expected...
* Correct ThresholdKeys serialization in modular-frost test
* Update existing TX size limit test for the new DKG parameters
* Increase time allowed for the DKG on the GH CI
* Correct construction of signature_participants in serai-client tests. Fault identified by akil.
* Further contextualize DkgConfirmer by ValidatorSet. Caught by a safety check that we wouldn't reuse preprocesses across messages. That raises the question of whether we were previously reusing preprocesses (reusing keys). Except that'd have caused a variety of signing failures (suggesting we had some staggered timing avoiding it in practice, but yes, this was possible in theory).
* Add necessary calls to set_embedded_elliptic_curve_key in coordinator set rotation tests
* Correct shimmed setting of a secq256k1 key
* cargo fmt
* Don't use `[0; 32]` for the embedded keys in the coordinator rotation test. The key_gen function expects the random values already decided.
* Big-endian secq256k1 scalars. Also restores the prior, safer, Encryption::register function.
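For orientation before the diff, here is a minimal, hypothetical sketch of how the one-round flow introduced by this commit fits together, based on the `EvrfDkg` API added in crypto/dkg/src/evrf/mod.rs (shown later on this page). The crate/module paths, the `Secp256k1` ciphersuite (behind the `evrf-secp256k1` feature), and the broadcast step are assumptions and elisions, not code from this commit.

use std::collections::HashMap;

use rand_core::OsRng;
use zeroize::Zeroizing;
use ciphersuite::{Ciphersuite, Secp256k1};
// Paths below are assumed; the dkg crate's public re-exports may differ.
use dkg::{
  Participant, ThresholdKeys,
  evrf::{EvrfCurve, EvrfDkg, EvrfGenerators, Participation, VerifyResult},
};

type Embedded = <Secp256k1 as EvrfCurve>::EmbeddedCurve;

// The single message each participant broadcasts: an eVRF proof plus everyone's
// encrypted secret shares.
fn participate_once(
  context: [u8; 32],
  t: u16,
  evrf_public_keys: &[<Embedded as Ciphersuite>::G],
  evrf_private_key: &Zeroizing<<Embedded as Ciphersuite>::F>,
) -> Participation<Secp256k1> {
  let n = u16::try_from(evrf_public_keys.len()).unwrap();
  let generators = EvrfGenerators::<Secp256k1>::new(t, n);
  EvrfDkg::<Secp256k1>::participate(
    &mut OsRng, &generators, context, t, evrf_public_keys, evrf_private_key,
  )
  .unwrap()
}

// Once `t` (or more) Participations have arrived over the authenticated,
// reliable broadcast channel, anyone can verify them, and each participant can
// derive their key share(s).
fn finish_dkg(
  context: [u8; 32],
  t: u16,
  evrf_public_keys: &[<Embedded as Ciphersuite>::G],
  participations: &HashMap<Participant, Participation<Secp256k1>>,
  evrf_private_key: &Zeroizing<<Embedded as Ciphersuite>::F>,
) -> Option<Vec<ThresholdKeys<Secp256k1>>> {
  let n = u16::try_from(evrf_public_keys.len()).unwrap();
  let generators = EvrfGenerators::<Secp256k1>::new(t, n);
  match EvrfDkg::<Secp256k1>::verify(
    &mut OsRng, &generators, context, t, evrf_public_keys, participations,
  )
  .unwrap()
  {
    // Everything verified and at least `t` key shares participated
    VerifyResult::Valid(dkg) => Some(dkg.keys(evrf_private_key)),
    // These participants sent invalid proofs/shares; blame them and retry
    VerifyResult::Invalid(_faulty) => None,
    VerifyResult::NotEnoughParticipants => None,
  }
}
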
@@ -36,9 +36,26 @@ multiexp = { path = "../multiexp", version = "0.4", default-features = false }
schnorr = { package = "schnorr-signatures", path = "../schnorr", version = "^0.5.1", default-features = false }
dleq = { path = "../dleq", version = "^0.4.1", default-features = false }

# eVRF DKG dependencies
subtle = { version = "2", default-features = false, features = ["std"], optional = true }
generic-array = { version = "1", default-features = false, features = ["alloc"], optional = true }
blake2 = { version = "0.10", default-features = false, features = ["std"], optional = true }
rand_chacha = { version = "0.3", default-features = false, features = ["std"], optional = true }
generalized-bulletproofs = { path = "../evrf/generalized-bulletproofs", default-features = false, optional = true }
ec-divisors = { path = "../evrf/divisors", default-features = false, optional = true }
generalized-bulletproofs-circuit-abstraction = { path = "../evrf/circuit-abstraction", optional = true }
generalized-bulletproofs-ec-gadgets = { path = "../evrf/ec-gadgets", optional = true }

secq256k1 = { path = "../evrf/secq256k1", optional = true }
embedwards25519 = { path = "../evrf/embedwards25519", optional = true }

[dev-dependencies]
rand_core = { version = "0.6", default-features = false, features = ["getrandom"] }
rand = { version = "0.8", default-features = false, features = ["std"] }
ciphersuite = { path = "../ciphersuite", default-features = false, features = ["ristretto"] }
generalized-bulletproofs = { path = "../evrf/generalized-bulletproofs", features = ["tests"] }
ec-divisors = { path = "../evrf/divisors", features = ["pasta"] }
pasta_curves = "0.5"

[features]
std = [
@@ -62,5 +79,22 @@ std = [
  "dleq/serialize"
]
borsh = ["dep:borsh"]
evrf = [
  "std",

  "dep:subtle",
  "dep:generic-array",

  "dep:blake2",
  "dep:rand_chacha",

  "dep:generalized-bulletproofs",
  "dep:ec-divisors",
  "dep:generalized-bulletproofs-circuit-abstraction",
  "dep:generalized-bulletproofs-ec-gadgets",
]
evrf-secp256k1 = ["evrf", "ciphersuite/secp256k1", "secq256k1"]
evrf-ed25519 = ["evrf", "ciphersuite/ed25519", "embedwards25519"]
evrf-ristretto = ["evrf", "ciphersuite/ristretto", "embedwards25519"]
tests = ["rand_core/getrandom"]
default = ["std"]

@@ -98,11 +98,11 @@ fn ecdh<C: Ciphersuite>(private: &Zeroizing<C::F>, public: C::G) -> Zeroizing<C:
|
||||
|
||||
// Each ecdh must be distinct. Reuse of an ecdh for multiple ciphers will cause the messages to be
|
||||
// leaked.
|
||||
fn cipher<C: Ciphersuite>(context: &str, ecdh: &Zeroizing<C::G>) -> ChaCha20 {
|
||||
fn cipher<C: Ciphersuite>(context: [u8; 32], ecdh: &Zeroizing<C::G>) -> ChaCha20 {
|
||||
// Ideally, we'd box this transcript with ZAlloc, yet that's only possible on nightly
|
||||
// TODO: https://github.com/serai-dex/serai/issues/151
|
||||
let mut transcript = RecommendedTranscript::new(b"DKG Encryption v0.2");
|
||||
transcript.append_message(b"context", context.as_bytes());
|
||||
transcript.append_message(b"context", context);
|
||||
|
||||
transcript.domain_separate(b"encryption_key");
|
||||
|
||||
@@ -134,7 +134,7 @@ fn cipher<C: Ciphersuite>(context: &str, ecdh: &Zeroizing<C::G>) -> ChaCha20 {
|
||||
|
||||
fn encrypt<R: RngCore + CryptoRng, C: Ciphersuite, E: Encryptable>(
|
||||
rng: &mut R,
|
||||
context: &str,
|
||||
context: [u8; 32],
|
||||
from: Participant,
|
||||
to: C::G,
|
||||
mut msg: Zeroizing<E>,
|
||||
@@ -197,7 +197,7 @@ impl<C: Ciphersuite, E: Encryptable> EncryptedMessage<C, E> {
|
||||
pub(crate) fn invalidate_msg<R: RngCore + CryptoRng>(
|
||||
&mut self,
|
||||
rng: &mut R,
|
||||
context: &str,
|
||||
context: [u8; 32],
|
||||
from: Participant,
|
||||
) {
|
||||
// Invalidate the message by specifying a new key/Schnorr PoP
|
||||
@@ -219,7 +219,7 @@ impl<C: Ciphersuite, E: Encryptable> EncryptedMessage<C, E> {
|
||||
pub(crate) fn invalidate_share_serialization<R: RngCore + CryptoRng>(
|
||||
&mut self,
|
||||
rng: &mut R,
|
||||
context: &str,
|
||||
context: [u8; 32],
|
||||
from: Participant,
|
||||
to: C::G,
|
||||
) {
|
||||
@@ -243,7 +243,7 @@ impl<C: Ciphersuite, E: Encryptable> EncryptedMessage<C, E> {
|
||||
pub(crate) fn invalidate_share_value<R: RngCore + CryptoRng>(
|
||||
&mut self,
|
||||
rng: &mut R,
|
||||
context: &str,
|
||||
context: [u8; 32],
|
||||
from: Participant,
|
||||
to: C::G,
|
||||
) {
|
||||
@@ -300,14 +300,14 @@ impl<C: Ciphersuite> EncryptionKeyProof<C> {
|
||||
// This still doesn't mean the DKG offers an authenticated channel. The per-message keys have no
|
||||
// root of trust other than their existence in the assumed-to-exist external authenticated channel.
|
||||
fn pop_challenge<C: Ciphersuite>(
|
||||
context: &str,
|
||||
context: [u8; 32],
|
||||
nonce: C::G,
|
||||
key: C::G,
|
||||
sender: Participant,
|
||||
msg: &[u8],
|
||||
) -> C::F {
|
||||
let mut transcript = RecommendedTranscript::new(b"DKG Encryption Key Proof of Possession v0.2");
|
||||
transcript.append_message(b"context", context.as_bytes());
|
||||
transcript.append_message(b"context", context);
|
||||
|
||||
transcript.domain_separate(b"proof_of_possession");
|
||||
|
||||
@@ -323,9 +323,9 @@ fn pop_challenge<C: Ciphersuite>(
|
||||
C::hash_to_F(b"DKG-encryption-proof_of_possession", &transcript.challenge(b"schnorr"))
|
||||
}
|
||||
|
||||
fn encryption_key_transcript(context: &str) -> RecommendedTranscript {
|
||||
fn encryption_key_transcript(context: [u8; 32]) -> RecommendedTranscript {
|
||||
let mut transcript = RecommendedTranscript::new(b"DKG Encryption Key Correctness Proof v0.2");
|
||||
transcript.append_message(b"context", context.as_bytes());
|
||||
transcript.append_message(b"context", context);
|
||||
transcript
|
||||
}
|
||||
|
||||
@@ -337,58 +337,17 @@ pub(crate) enum DecryptionError {
|
||||
InvalidProof,
|
||||
}
|
||||
|
||||
// A simple box for managing encryption.
|
||||
#[derive(Clone)]
|
||||
pub(crate) struct Encryption<C: Ciphersuite> {
|
||||
context: String,
|
||||
i: Option<Participant>,
|
||||
enc_key: Zeroizing<C::F>,
|
||||
enc_pub_key: C::G,
|
||||
// A simple box for managing decryption.
|
||||
#[derive(Clone, Debug)]
|
||||
pub(crate) struct Decryption<C: Ciphersuite> {
|
||||
context: [u8; 32],
|
||||
enc_keys: HashMap<Participant, C::G>,
|
||||
}
|
||||
|
||||
impl<C: Ciphersuite> fmt::Debug for Encryption<C> {
|
||||
fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result {
|
||||
fmt
|
||||
.debug_struct("Encryption")
|
||||
.field("context", &self.context)
|
||||
.field("i", &self.i)
|
||||
.field("enc_pub_key", &self.enc_pub_key)
|
||||
.field("enc_keys", &self.enc_keys)
|
||||
.finish_non_exhaustive()
|
||||
impl<C: Ciphersuite> Decryption<C> {
|
||||
pub(crate) fn new(context: [u8; 32]) -> Self {
|
||||
Self { context, enc_keys: HashMap::new() }
|
||||
}
|
||||
}
|
||||
|
||||
impl<C: Ciphersuite> Zeroize for Encryption<C> {
|
||||
fn zeroize(&mut self) {
|
||||
self.enc_key.zeroize();
|
||||
self.enc_pub_key.zeroize();
|
||||
for (_, mut value) in self.enc_keys.drain() {
|
||||
value.zeroize();
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
impl<C: Ciphersuite> Encryption<C> {
|
||||
pub(crate) fn new<R: RngCore + CryptoRng>(
|
||||
context: String,
|
||||
i: Option<Participant>,
|
||||
rng: &mut R,
|
||||
) -> Self {
|
||||
let enc_key = Zeroizing::new(C::random_nonzero_F(rng));
|
||||
Self {
|
||||
context,
|
||||
i,
|
||||
enc_pub_key: C::generator() * enc_key.deref(),
|
||||
enc_key,
|
||||
enc_keys: HashMap::new(),
|
||||
}
|
||||
}
|
||||
|
||||
pub(crate) fn registration<M: Message>(&self, msg: M) -> EncryptionKeyMessage<C, M> {
|
||||
EncryptionKeyMessage { msg, enc_key: self.enc_pub_key }
|
||||
}
|
||||
|
||||
pub(crate) fn register<M: Message>(
|
||||
&mut self,
|
||||
participant: Participant,
|
||||
@@ -402,13 +361,109 @@ impl<C: Ciphersuite> Encryption<C> {
|
||||
msg.msg
|
||||
}
|
||||
|
||||
// Given a message, and the intended decryptor, and a proof for its key, decrypt the message.
|
||||
// Returns None if the key was wrong.
|
||||
pub(crate) fn decrypt_with_proof<E: Encryptable>(
|
||||
&self,
|
||||
from: Participant,
|
||||
decryptor: Participant,
|
||||
mut msg: EncryptedMessage<C, E>,
|
||||
// There's no encryption key proof if the accusation is of an invalid signature
|
||||
proof: Option<EncryptionKeyProof<C>>,
|
||||
) -> Result<Zeroizing<E>, DecryptionError> {
|
||||
if !msg.pop.verify(
|
||||
msg.key,
|
||||
pop_challenge::<C>(self.context, msg.pop.R, msg.key, from, msg.msg.deref().as_ref()),
|
||||
) {
|
||||
Err(DecryptionError::InvalidSignature)?;
|
||||
}
|
||||
|
||||
if let Some(proof) = proof {
|
||||
// Verify this is the decryption key for this message
|
||||
proof
|
||||
.dleq
|
||||
.verify(
|
||||
&mut encryption_key_transcript(self.context),
|
||||
&[C::generator(), msg.key],
|
||||
&[self.enc_keys[&decryptor], *proof.key],
|
||||
)
|
||||
.map_err(|_| DecryptionError::InvalidProof)?;
|
||||
|
||||
cipher::<C>(self.context, &proof.key).apply_keystream(msg.msg.as_mut().as_mut());
|
||||
Ok(msg.msg)
|
||||
} else {
|
||||
Err(DecryptionError::InvalidProof)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// A simple box for managing encryption.
|
||||
#[derive(Clone)]
|
||||
pub(crate) struct Encryption<C: Ciphersuite> {
|
||||
context: [u8; 32],
|
||||
i: Participant,
|
||||
enc_key: Zeroizing<C::F>,
|
||||
enc_pub_key: C::G,
|
||||
decryption: Decryption<C>,
|
||||
}
|
||||
|
||||
impl<C: Ciphersuite> fmt::Debug for Encryption<C> {
|
||||
fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result {
|
||||
fmt
|
||||
.debug_struct("Encryption")
|
||||
.field("context", &self.context)
|
||||
.field("i", &self.i)
|
||||
.field("enc_pub_key", &self.enc_pub_key)
|
||||
.field("decryption", &self.decryption)
|
||||
.finish_non_exhaustive()
|
||||
}
|
||||
}
|
||||
|
||||
impl<C: Ciphersuite> Zeroize for Encryption<C> {
|
||||
fn zeroize(&mut self) {
|
||||
self.enc_key.zeroize();
|
||||
self.enc_pub_key.zeroize();
|
||||
for (_, mut value) in self.decryption.enc_keys.drain() {
|
||||
value.zeroize();
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
impl<C: Ciphersuite> Encryption<C> {
|
||||
pub(crate) fn new<R: RngCore + CryptoRng>(
|
||||
context: [u8; 32],
|
||||
i: Participant,
|
||||
rng: &mut R,
|
||||
) -> Self {
|
||||
let enc_key = Zeroizing::new(C::random_nonzero_F(rng));
|
||||
Self {
|
||||
context,
|
||||
i,
|
||||
enc_pub_key: C::generator() * enc_key.deref(),
|
||||
enc_key,
|
||||
decryption: Decryption::new(context),
|
||||
}
|
||||
}
|
||||
|
||||
pub(crate) fn registration<M: Message>(&self, msg: M) -> EncryptionKeyMessage<C, M> {
|
||||
EncryptionKeyMessage { msg, enc_key: self.enc_pub_key }
|
||||
}
|
||||
|
||||
pub(crate) fn register<M: Message>(
|
||||
&mut self,
|
||||
participant: Participant,
|
||||
msg: EncryptionKeyMessage<C, M>,
|
||||
) -> M {
|
||||
self.decryption.register(participant, msg)
|
||||
}
|
||||
|
||||
pub(crate) fn encrypt<R: RngCore + CryptoRng, E: Encryptable>(
|
||||
&self,
|
||||
rng: &mut R,
|
||||
participant: Participant,
|
||||
msg: Zeroizing<E>,
|
||||
) -> EncryptedMessage<C, E> {
|
||||
encrypt(rng, &self.context, self.i.unwrap(), self.enc_keys[&participant], msg)
|
||||
encrypt(rng, self.context, self.i, self.decryption.enc_keys[&participant], msg)
|
||||
}
|
||||
|
||||
pub(crate) fn decrypt<R: RngCore + CryptoRng, I: Copy + Zeroize, E: Encryptable>(
|
||||
@@ -426,18 +481,18 @@ impl<C: Ciphersuite> Encryption<C> {
|
||||
batch,
|
||||
batch_id,
|
||||
msg.key,
|
||||
pop_challenge::<C>(&self.context, msg.pop.R, msg.key, from, msg.msg.deref().as_ref()),
|
||||
pop_challenge::<C>(self.context, msg.pop.R, msg.key, from, msg.msg.deref().as_ref()),
|
||||
);
|
||||
|
||||
let key = ecdh::<C>(&self.enc_key, msg.key);
|
||||
cipher::<C>(&self.context, &key).apply_keystream(msg.msg.as_mut().as_mut());
|
||||
cipher::<C>(self.context, &key).apply_keystream(msg.msg.as_mut().as_mut());
|
||||
(
|
||||
msg.msg,
|
||||
EncryptionKeyProof {
|
||||
key,
|
||||
dleq: DLEqProof::prove(
|
||||
rng,
|
||||
&mut encryption_key_transcript(&self.context),
|
||||
&mut encryption_key_transcript(self.context),
|
||||
&[C::generator(), msg.key],
|
||||
&self.enc_key,
|
||||
),
|
||||
@@ -445,38 +500,7 @@ impl<C: Ciphersuite> Encryption<C> {
|
||||
)
|
||||
}
|
||||
|
||||
// Given a message, and the intended decryptor, and a proof for its key, decrypt the message.
|
||||
// Returns None if the key was wrong.
|
||||
pub(crate) fn decrypt_with_proof<E: Encryptable>(
|
||||
&self,
|
||||
from: Participant,
|
||||
decryptor: Participant,
|
||||
mut msg: EncryptedMessage<C, E>,
|
||||
// There's no encryption key proof if the accusation is of an invalid signature
|
||||
proof: Option<EncryptionKeyProof<C>>,
|
||||
) -> Result<Zeroizing<E>, DecryptionError> {
|
||||
if !msg.pop.verify(
|
||||
msg.key,
|
||||
pop_challenge::<C>(&self.context, msg.pop.R, msg.key, from, msg.msg.deref().as_ref()),
|
||||
) {
|
||||
Err(DecryptionError::InvalidSignature)?;
|
||||
}
|
||||
|
||||
if let Some(proof) = proof {
|
||||
// Verify this is the decryption key for this message
|
||||
proof
|
||||
.dleq
|
||||
.verify(
|
||||
&mut encryption_key_transcript(&self.context),
|
||||
&[C::generator(), msg.key],
|
||||
&[self.enc_keys[&decryptor], *proof.key],
|
||||
)
|
||||
.map_err(|_| DecryptionError::InvalidProof)?;
|
||||
|
||||
cipher::<C>(&self.context, &proof.key).apply_keystream(msg.msg.as_mut().as_mut());
|
||||
Ok(msg.msg)
|
||||
} else {
|
||||
Err(DecryptionError::InvalidProof)
|
||||
}
|
||||
pub(crate) fn into_decryption(self) -> Decryption<C> {
|
||||
self.decryption
|
||||
}
|
||||
}
|
||||
|
||||
crypto/dkg/src/evrf/mod.rs (new file, 584 lines)
@@ -0,0 +1,584 @@
|
||||
/*
|
||||
We implement a DKG using an eVRF, as detailed in the eVRF paper. For the eVRF itself, we do not
|
||||
use a Paillier-based construction, nor the detailed construction premised on a Bulletproof.
|
||||
|
||||
For reference, the detailed construction premised on a Bulletproof involves two curves, notated
|
||||
here as `C` and `E`, where the scalar field of `C` is the field of `E`. Accordingly, Bulletproofs
|
||||
over `C` can efficiently perform group operations of points of curve `E`. Each participant has a
|
||||
private point (`P_i`) on curve `E` committed to over curve `C`. The eVRF selects a pair of
|
||||
scalars `a, b`, where the participant proves in-Bulletproof the points `A_i, B_i` are
|
||||
`a * P_i, b * P_i`. The eVRF proceeds to commit to `A_i.x + B_i.x` in a Pedersen Commitment.
|
||||
|
||||
Our eVRF uses
|
||||
[Generalized Bulletproofs](
|
||||
https://repo.getmonero.org/monero-project/ccs-proposals
|
||||
/uploads/a9baa50c38c6312efc0fea5c6a188bb9/gbp.pdf
|
||||
).
|
||||
This allows us much larger witnesses without growing the reference string, and enables us to
|
||||
efficiently sample challenges off in-circuit variables (via placing the variables in a vector
|
||||
commitment, then challenging from a transcript of the commitments). We proceed to use
|
||||
[elliptic curve divisors](
|
||||
https://repo.getmonero.org/-/project/54/
|
||||
uploads/eb1bf5b4d4855a3480c38abf895bd8e8/Veridise_Divisor_Proofs.pdf
|
||||
)
|
||||
(which require the ability to sample a challenge off in-circuit variables) to prove discrete
|
||||
logarithms efficiently.
|
||||
|
||||
This is done via having a private scalar (`p_i`) on curve `E`, not a private point, and
|
||||
publishing the public key for it (`P_i = p_i * G`, where `G` is a generator of `E`). The eVRF
|
||||
samples two points with unknown discrete logarithms `A, B`, and the circuit proves a Pedersen
|
||||
Commitment commits to `(p_i * A).x + (p_i * B).x`.
|
||||
|
||||
With the eVRF established, we now detail our other novel aspect. The eVRF paper expects secret
|
||||
shares to be sent to the other parties yet does not detail a precise way to do so. If we
|
||||
encrypted the secret shares with some stream cipher, each recipient would have to attest validity
|
||||
or accuse the sender of impropriety. We want an encryption scheme where anyone can verify the
|
||||
secret shares were encrypted properly, without additional info, efficiently.
|
||||
|
||||
Please note that, from the published commitments, it's possible to calculate a commitment to the
|
||||
secret share each party should receive (`V_i`).
|
||||
|
||||
We have the sender sample two scalars per recipient, denoted `x_i, y_i` (where `i` is the
|
||||
recipient index). They perform the eVRF to prove a Pedersen Commitment commits to
|
||||
`z_i = (x_i * P_i).x + (y_i * P_i).x` and `x_i, y_i` are the discrete logarithms of `X_i, Y_i`
|
||||
over `G`. They then publish the encrypted share `s_i + z_i` and `X_i, Y_i`.
|
||||
|
||||
The recipient is able to decrypt the share via calculating
|
||||
`s_i - ((p_i * X_i).x + (p_i * Y_i).x)`.
|
||||
|
||||
To verify the secret share, we have the `F` terms of the Pedersen Commitments revealed (where
|
||||
`F, H` are generators of `C`, `F` is used for binding and `H` for blinding). This already needs
|
||||
to be done for the eVRF outputs used within the DKG, in order to obtain the commitments to the
|
||||
coefficients. When we have the commitment `Z_i = ((p_i * A).x + (p_i * B).x) * F`, we simply
|
||||
check `s_i * F = Z_i + V_i`.
|
||||
|
||||
In order to open the Pedersen Commitments to their `F` terms, we transcript the commitments and
|
||||
the claimed openings, then assign random weights to each pair of `(commitment, opening)`. The
|
||||
prover proves knowledge of the discrete logarithm of the sum weighted commitments, minus the
sum weighted openings, over `H`.
|
||||
|
||||
The benefit to this construction is that given a broadcast channel which is reliable and
|
||||
ordered, only `t` messages must be broadcast from honest parties in order to create a `t`-of-`n`
|
||||
multisig. If the encrypted secret shares were not verifiable, one would need at least `t + n`
|
||||
messages to ensure every participant has a correct dealing and can participate in future
|
||||
reconstructions of the secret. This would also require all `n` parties be online, whereas this is
|
||||
robust to threshold `t`.
|
||||
*/
|
||||
|
||||
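// Illustrative sketch (not part of this file): the recipient-side decryption
// described above, mirroring the arithmetic `EvrfDkg::keys` performs below.
// Given the sender's published ephemeral keys `X_i, Y_i` and the ciphertext
// `s_i + z_i`, the recipient holding `p_i` recomputes
// `z_i = (p_i * X_i).x + (p_i * Y_i).x` and subtracts it. It relies on the
// same imports this file uses; the real code also zeroizes the intermediates.
fn decrypt_share_sketch<C: EvrfCurve>(
  evrf_private_key: &Zeroizing<<C::EmbeddedCurve as Ciphersuite>::F>,
  // The sender's published ephemeral keys X_i, Y_i
  ecdh_keys: [<C::EmbeddedCurve as Ciphersuite>::G; 2],
  // The published s_i + z_i
  encrypted_share: C::F,
) -> C::F {
  let mut mask = C::F::ZERO;
  for point in ecdh_keys {
    // The x-coordinate of p_i * X_i (then of p_i * Y_i)
    let (x, _y) =
      <C::EmbeddedCurve as Ciphersuite>::G::to_xy(point * evrf_private_key.deref()).unwrap();
    mask += x;
  }
  encrypted_share - mask
}
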
use core::ops::Deref;
|
||||
use std::{
|
||||
io::{self, Read, Write},
|
||||
collections::{HashSet, HashMap},
|
||||
};
|
||||
|
||||
use rand_core::{RngCore, CryptoRng};
|
||||
|
||||
use zeroize::{Zeroize, Zeroizing};
|
||||
|
||||
use blake2::{Digest, Blake2s256};
|
||||
use ciphersuite::{
|
||||
group::{
|
||||
ff::{Field, PrimeField},
|
||||
Group, GroupEncoding,
|
||||
},
|
||||
Ciphersuite,
|
||||
};
|
||||
use multiexp::multiexp_vartime;
|
||||
|
||||
use generalized_bulletproofs::arithmetic_circuit_proof::*;
|
||||
use ec_divisors::DivisorCurve;
|
||||
|
||||
use crate::{Participant, ThresholdParams, Interpolation, ThresholdCore, ThresholdKeys};
|
||||
|
||||
pub(crate) mod proof;
|
||||
use proof::*;
|
||||
pub use proof::{EvrfCurve, EvrfGenerators};
|
||||
|
||||
/// Participation in the DKG.
|
||||
///
|
||||
/// `Participation` is meant to be broadcast to all other participants over an authenticated,
|
||||
/// reliable broadcast channel.
|
||||
#[derive(Clone, PartialEq, Eq, Debug)]
|
||||
pub struct Participation<C: Ciphersuite> {
|
||||
proof: Vec<u8>,
|
||||
encrypted_secret_shares: HashMap<Participant, C::F>,
|
||||
}
|
||||
|
||||
impl<C: Ciphersuite> Participation<C> {
|
||||
pub fn read<R: Read>(reader: &mut R, n: u16) -> io::Result<Self> {
|
||||
// TODO: Replace `len` with some calculation deterministic to the params
|
||||
let mut len = [0; 4];
|
||||
reader.read_exact(&mut len)?;
|
||||
let len = usize::try_from(u32::from_le_bytes(len)).expect("<32-bit platform?");
|
||||
|
||||
// Don't allocate a buffer for the claimed length
|
||||
// Read chunks until we reach the claimed length
|
||||
// This means if we were told to read GB, we must actually be sent GB before allocating as such
|
||||
const CHUNK_SIZE: usize = 1024;
|
||||
let mut proof = Vec::with_capacity(len.min(CHUNK_SIZE));
|
||||
while proof.len() < len {
|
||||
let next_chunk = (len - proof.len()).min(CHUNK_SIZE);
|
||||
let old_proof_len = proof.len();
|
||||
proof.resize(old_proof_len + next_chunk, 0);
|
||||
reader.read_exact(&mut proof[old_proof_len ..])?;
|
||||
}
|
||||
|
||||
let mut encrypted_secret_shares = HashMap::with_capacity(usize::from(n));
|
||||
for i in (1 ..= n).map(Participant) {
|
||||
encrypted_secret_shares.insert(i, C::read_F(reader)?);
|
||||
}
|
||||
|
||||
Ok(Self { proof, encrypted_secret_shares })
|
||||
}
|
||||
|
||||
pub fn write<W: Write>(&self, writer: &mut W) -> io::Result<()> {
|
||||
writer.write_all(&u32::try_from(self.proof.len()).unwrap().to_le_bytes())?;
|
||||
writer.write_all(&self.proof)?;
|
||||
for i in (1 ..= u16::try_from(self.encrypted_secret_shares.len())
|
||||
.expect("writing a Participation which has a n > u16::MAX"))
|
||||
.map(Participant)
|
||||
{
|
||||
writer.write_all(self.encrypted_secret_shares[&i].to_repr().as_ref())?;
|
||||
}
|
||||
Ok(())
|
||||
}
|
||||
}
|
||||
|
||||
fn polynomial<F: PrimeField + Zeroize>(
|
||||
coefficients: &[Zeroizing<F>],
|
||||
l: Participant,
|
||||
) -> Zeroizing<F> {
|
||||
let l = F::from(u64::from(u16::from(l)));
|
||||
// This should never be reached since Participant is explicitly non-zero
|
||||
assert!(l != F::ZERO, "zero participant passed to polynomial");
|
||||
let mut share = Zeroizing::new(F::ZERO);
|
||||
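// Evaluate the polynomial at `l` via Horner's method: starting from the
// highest-degree coefficient, add each coefficient and multiply by `l`,
// skipping the final multiplication so the constant term is added last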
for (idx, coefficient) in coefficients.iter().rev().enumerate() {
|
||||
*share += coefficient.deref();
|
||||
if idx != (coefficients.len() - 1) {
|
||||
*share *= l;
|
||||
}
|
||||
}
|
||||
share
|
||||
}
|
||||
|
||||
#[allow(clippy::type_complexity)]
|
||||
fn share_verification_statements<C: Ciphersuite>(
|
||||
rng: &mut (impl RngCore + CryptoRng),
|
||||
commitments: &[C::G],
|
||||
n: u16,
|
||||
encryption_commitments: &[C::G],
|
||||
encrypted_secret_shares: &HashMap<Participant, C::F>,
|
||||
) -> (C::F, Vec<(C::F, C::G)>) {
|
||||
debug_assert_eq!(usize::from(n), encryption_commitments.len());
|
||||
debug_assert_eq!(usize::from(n), encrypted_secret_shares.len());
|
||||
|
||||
let mut g_scalar = C::F::ZERO;
|
||||
let mut pairs = Vec::with_capacity(commitments.len() + encryption_commitments.len());
|
||||
for commitment in commitments {
|
||||
pairs.push((C::F::ZERO, *commitment));
|
||||
}
|
||||
|
||||
let mut weight;
|
||||
for (i, enc_share) in encrypted_secret_shares {
|
||||
let enc_commitment = encryption_commitments[usize::from(u16::from(*i)) - 1];
|
||||
|
||||
weight = C::F::random(&mut *rng);
|
||||
|
||||
// s_i F
|
||||
g_scalar += weight * enc_share;
|
||||
// - Z_i
|
||||
let weight = -weight;
|
||||
pairs.push((weight, enc_commitment));
|
||||
// - V_i
|
||||
{
|
||||
let i = C::F::from(u64::from(u16::from(*i)));
|
||||
// The first `commitments.len()` pairs are for the commitments
|
||||
(0 .. commitments.len()).fold(weight, |exp, j| {
|
||||
pairs[j].0 += exp;
|
||||
exp * i
|
||||
});
|
||||
}
|
||||
}
|
||||
|
||||
(g_scalar, pairs)
|
||||
}
|
||||
|
||||
/// Errors from the eVRF DKG.
|
||||
#[derive(Clone, PartialEq, Eq, Debug, thiserror::Error)]
|
||||
pub enum EvrfError {
|
||||
#[error("n, the amount of participants, exceeded a u16")]
|
||||
TooManyParticipants,
|
||||
#[error("the threshold t wasn't in range 1 <= t <= n")]
|
||||
InvalidThreshold,
|
||||
#[error("a public key was the identity point")]
|
||||
PublicKeyWasIdentity,
|
||||
#[error("participating in a DKG we aren't a participant in")]
|
||||
NotAParticipant,
|
||||
#[error("a participant with an unrecognized ID participated")]
|
||||
NonExistentParticipant,
|
||||
#[error("the passed in generators did not have enough generators for this DKG")]
|
||||
NotEnoughGenerators,
|
||||
}
|
||||
|
||||
/// The result of calling EvrfDkg::verify.
|
||||
pub enum VerifyResult<C: EvrfCurve> {
|
||||
Valid(EvrfDkg<C>),
|
||||
Invalid(Vec<Participant>),
|
||||
NotEnoughParticipants,
|
||||
}
|
||||
|
||||
/// Struct to perform/verify the DKG with.
|
||||
#[derive(Debug)]
|
||||
pub struct EvrfDkg<C: EvrfCurve> {
|
||||
t: u16,
|
||||
n: u16,
|
||||
evrf_public_keys: Vec<<C::EmbeddedCurve as Ciphersuite>::G>,
|
||||
group_key: C::G,
|
||||
verification_shares: HashMap<Participant, C::G>,
|
||||
#[allow(clippy::type_complexity)]
|
||||
encrypted_secret_shares:
|
||||
HashMap<Participant, HashMap<Participant, ([<C::EmbeddedCurve as Ciphersuite>::G; 2], C::F)>>,
|
||||
}
|
||||
|
||||
impl<C: EvrfCurve> EvrfDkg<C> {
|
||||
// Form the initial transcript for the proofs.
|
||||
fn initial_transcript(
|
||||
invocation: [u8; 32],
|
||||
evrf_public_keys: &[<C::EmbeddedCurve as Ciphersuite>::G],
|
||||
t: u16,
|
||||
) -> [u8; 32] {
|
||||
let mut transcript = Blake2s256::new();
|
||||
transcript.update(invocation);
|
||||
for key in evrf_public_keys {
|
||||
transcript.update(key.to_bytes().as_ref());
|
||||
}
|
||||
transcript.update(t.to_le_bytes());
|
||||
transcript.finalize().into()
|
||||
}
|
||||
|
||||
/// Participate in performing the DKG for the specified parameters.
|
||||
///
|
||||
/// The context MUST be unique across invocations. Reuse of context will lead to sharing
|
||||
/// prior-shared secrets.
|
||||
///
|
||||
/// Public keys are not allowed to be the identity point. This will error if any are.
|
||||
pub fn participate(
|
||||
rng: &mut (impl RngCore + CryptoRng),
|
||||
generators: &EvrfGenerators<C>,
|
||||
context: [u8; 32],
|
||||
t: u16,
|
||||
evrf_public_keys: &[<C::EmbeddedCurve as Ciphersuite>::G],
|
||||
evrf_private_key: &Zeroizing<<C::EmbeddedCurve as Ciphersuite>::F>,
|
||||
) -> Result<Participation<C>, EvrfError> {
|
||||
let Ok(n) = u16::try_from(evrf_public_keys.len()) else { Err(EvrfError::TooManyParticipants)? };
|
||||
if (t == 0) || (t > n) {
|
||||
Err(EvrfError::InvalidThreshold)?;
|
||||
}
|
||||
if evrf_public_keys.iter().any(|key| bool::from(key.is_identity())) {
|
||||
Err(EvrfError::PublicKeyWasIdentity)?;
|
||||
};
|
||||
let evrf_public_key = <C::EmbeddedCurve as Ciphersuite>::generator() * evrf_private_key.deref();
|
||||
if !evrf_public_keys.iter().any(|key| *key == evrf_public_key) {
|
||||
Err(EvrfError::NotAParticipant)?;
|
||||
};
|
||||
|
||||
let transcript = Self::initial_transcript(context, evrf_public_keys, t);
|
||||
// Further bind to the participant index so each index gets unique generators
|
||||
// This allows reusing eVRF public keys as the prover
|
||||
let mut per_proof_transcript = Blake2s256::new();
|
||||
per_proof_transcript.update(transcript);
|
||||
per_proof_transcript.update(evrf_public_key.to_bytes());
|
||||
|
||||
// The above transcript is expected to be binding to all arguments here
|
||||
// The generators are constant to this ciphersuite's generator, and the parameters are
|
||||
// transcripted
|
||||
let EvrfProveResult { coefficients, encryption_masks, proof } = match Evrf::prove(
|
||||
rng,
|
||||
&generators.0,
|
||||
per_proof_transcript.finalize().into(),
|
||||
usize::from(t),
|
||||
evrf_public_keys,
|
||||
evrf_private_key,
|
||||
) {
|
||||
Ok(res) => res,
|
||||
Err(AcError::NotEnoughGenerators) => Err(EvrfError::NotEnoughGenerators)?,
|
||||
Err(
|
||||
AcError::DifferingLrLengths |
|
||||
AcError::InconsistentAmountOfConstraints |
|
||||
AcError::ConstrainedNonExistentTerm |
|
||||
AcError::ConstrainedNonExistentCommitment |
|
||||
AcError::InconsistentWitness |
|
||||
AcError::Ip(_) |
|
||||
AcError::IncompleteProof,
|
||||
) => {
|
||||
panic!("failed to prove for the eVRF proof")
|
||||
}
|
||||
};
|
||||
|
||||
let mut encrypted_secret_shares = HashMap::with_capacity(usize::from(n));
|
||||
for (l, encryption_mask) in (1 ..= n).map(Participant).zip(encryption_masks) {
|
||||
let share = polynomial::<C::F>(&coefficients, l);
|
||||
encrypted_secret_shares.insert(l, *share + *encryption_mask);
|
||||
}
|
||||
|
||||
Ok(Participation { proof, encrypted_secret_shares })
|
||||
}
|
||||
|
||||
/// Check if a batch of `Participation`s are valid.
|
||||
///
|
||||
/// If any `Participation` is invalid, the list of all invalid participants will be returned.
|
||||
/// If all `Participation`s are valid and there's at least `t`, an instance of this struct
|
||||
/// (usable to obtain a threshold share of the generated key) is returned. If all are valid and
|
||||
/// there's not at least `t`, `VerifyResult::NotEnoughParticipants` is returned.
|
||||
///
|
||||
/// This DKG is unbiased if all `n` people participate. This DKG is biased if only a threshold
|
||||
/// participate.
|
||||
pub fn verify(
|
||||
rng: &mut (impl RngCore + CryptoRng),
|
||||
generators: &EvrfGenerators<C>,
|
||||
context: [u8; 32],
|
||||
t: u16,
|
||||
evrf_public_keys: &[<C::EmbeddedCurve as Ciphersuite>::G],
|
||||
participations: &HashMap<Participant, Participation<C>>,
|
||||
) -> Result<VerifyResult<C>, EvrfError> {
|
||||
let Ok(n) = u16::try_from(evrf_public_keys.len()) else { Err(EvrfError::TooManyParticipants)? };
|
||||
if (t == 0) || (t > n) {
|
||||
Err(EvrfError::InvalidThreshold)?;
|
||||
}
|
||||
if evrf_public_keys.iter().any(|key| bool::from(key.is_identity())) {
|
||||
Err(EvrfError::PublicKeyWasIdentity)?;
|
||||
};
|
||||
for i in participations.keys() {
|
||||
if u16::from(*i) > n {
|
||||
Err(EvrfError::NonExistentParticipant)?;
|
||||
}
|
||||
}
|
||||
|
||||
let mut valid = HashMap::with_capacity(participations.len());
|
||||
let mut faulty = HashSet::new();
|
||||
|
||||
let transcript = Self::initial_transcript(context, evrf_public_keys, t);
|
||||
|
||||
let mut evrf_verifier = generators.0.batch_verifier();
|
||||
for (i, participation) in participations {
|
||||
let evrf_public_key = evrf_public_keys[usize::from(u16::from(*i)) - 1];
|
||||
|
||||
let mut per_proof_transcript = Blake2s256::new();
|
||||
per_proof_transcript.update(transcript);
|
||||
per_proof_transcript.update(evrf_public_key.to_bytes());
|
||||
|
||||
// Clone the verifier so if this proof is faulty, it doesn't corrupt the verifier
|
||||
let mut verifier_clone = evrf_verifier.clone();
|
||||
let Ok(data) = Evrf::<C>::verify(
|
||||
rng,
|
||||
&generators.0,
|
||||
&mut verifier_clone,
|
||||
per_proof_transcript.finalize().into(),
|
||||
usize::from(t),
|
||||
evrf_public_keys,
|
||||
evrf_public_key,
|
||||
&participation.proof,
|
||||
) else {
|
||||
faulty.insert(*i);
|
||||
continue;
|
||||
};
|
||||
evrf_verifier = verifier_clone;
|
||||
|
||||
valid.insert(*i, (participation.encrypted_secret_shares.clone(), data));
|
||||
}
|
||||
debug_assert_eq!(valid.len() + faulty.len(), participations.len());
|
||||
|
||||
// Perform the batch verification of the eVRFs
|
||||
if !generators.0.verify(evrf_verifier) {
|
||||
// If the batch failed, verify them each individually
|
||||
for (i, participation) in participations {
|
||||
if faulty.contains(i) {
|
||||
continue;
|
||||
}
|
||||
let mut evrf_verifier = generators.0.batch_verifier();
|
||||
Evrf::<C>::verify(
|
||||
rng,
|
||||
&generators.0,
|
||||
&mut evrf_verifier,
|
||||
context,
|
||||
usize::from(t),
|
||||
evrf_public_keys,
|
||||
evrf_public_keys[usize::from(u16::from(*i)) - 1],
|
||||
&participation.proof,
|
||||
)
|
||||
.expect("evrf failed basic checks yet prover wasn't prior marked faulty");
|
||||
if !generators.0.verify(evrf_verifier) {
|
||||
valid.remove(i);
|
||||
faulty.insert(*i);
|
||||
}
|
||||
}
|
||||
}
|
||||
debug_assert_eq!(valid.len() + faulty.len(), participations.len());
|
||||
|
||||
// Perform the batch verification of the shares
|
||||
let mut sum_encrypted_secret_shares = HashMap::with_capacity(usize::from(n));
|
||||
let mut sum_masks = HashMap::with_capacity(usize::from(n));
|
||||
let mut all_encrypted_secret_shares = HashMap::with_capacity(usize::from(t));
|
||||
{
|
||||
let mut share_verification_statements_actual = HashMap::with_capacity(valid.len());
|
||||
if !{
|
||||
let mut g_scalar = C::F::ZERO;
|
||||
let mut pairs = Vec::with_capacity(valid.len() * (usize::from(t) + evrf_public_keys.len()));
|
||||
for (i, (encrypted_secret_shares, data)) in &valid {
|
||||
let (this_g_scalar, mut these_pairs) = share_verification_statements::<C>(
|
||||
&mut *rng,
|
||||
&data.coefficients,
|
||||
evrf_public_keys
|
||||
.len()
|
||||
.try_into()
|
||||
.expect("n prior checked to be <= u16::MAX couldn't be converted to a u16"),
|
||||
&data.encryption_commitments,
|
||||
encrypted_secret_shares,
|
||||
);
|
||||
// Queue this into our batch
|
||||
g_scalar += this_g_scalar;
|
||||
pairs.extend(&these_pairs);
|
||||
|
||||
// Also push this g_scalar onto these_pairs so these_pairs can be verified individually
|
||||
// upon error
|
||||
these_pairs.push((this_g_scalar, generators.0.g()));
|
||||
share_verification_statements_actual.insert(*i, these_pairs);
|
||||
|
||||
// Also format this data as we'd need it upon success
|
||||
let mut formatted_encrypted_secret_shares = HashMap::with_capacity(usize::from(n));
|
||||
for (j, enc_share) in encrypted_secret_shares {
|
||||
/*
|
||||
We calculate verification shares as the sum of the encrypted scalars, minus their
|
||||
masks. This only does one scalar multiplication, and `1+t` point additions (with
|
||||
one negation), and is accordingly much cheaper than interpolating the commitments.
|
||||
This is only possible because we already interpolated the commitments to verify the
|
||||
encrypted secret share.
|
||||
*/
|
||||
let sum_encrypted_secret_share =
|
||||
sum_encrypted_secret_shares.get(j).copied().unwrap_or(C::F::ZERO);
|
||||
let sum_mask = sum_masks.get(j).copied().unwrap_or(C::G::identity());
|
||||
sum_encrypted_secret_shares.insert(*j, sum_encrypted_secret_share + enc_share);
|
||||
|
||||
let j_index = usize::from(u16::from(*j)) - 1;
|
||||
sum_masks.insert(*j, sum_mask + data.encryption_commitments[j_index]);
|
||||
|
||||
formatted_encrypted_secret_shares.insert(*j, (data.ecdh_keys[j_index], *enc_share));
|
||||
}
|
||||
all_encrypted_secret_shares.insert(*i, formatted_encrypted_secret_shares);
|
||||
}
|
||||
pairs.push((g_scalar, generators.0.g()));
|
||||
bool::from(multiexp_vartime(&pairs).is_identity())
|
||||
} {
|
||||
// If the batch failed, verify them each individually
|
||||
for (i, pairs) in share_verification_statements_actual {
|
||||
if !bool::from(multiexp_vartime(&pairs).is_identity()) {
|
||||
valid.remove(&i);
|
||||
faulty.insert(i);
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
debug_assert_eq!(valid.len() + faulty.len(), participations.len());
|
||||
|
||||
let mut faulty = faulty.into_iter().collect::<Vec<_>>();
|
||||
if !faulty.is_empty() {
|
||||
faulty.sort_unstable();
|
||||
return Ok(VerifyResult::Invalid(faulty));
|
||||
}
|
||||
|
||||
// We check at least t key shares of people have participated in contributing entropy
|
||||
// Since the key shares of the participants exceed t, meaning if they're malicious they can
|
||||
// reconstruct the key regardless, this is safe to the threshold
|
||||
{
|
||||
let mut participating_weight = 0;
|
||||
let mut evrf_public_keys_mut = evrf_public_keys.to_vec();
|
||||
for i in valid.keys() {
|
||||
let evrf_public_key = evrf_public_keys[usize::from(u16::from(*i)) - 1];
|
||||
|
||||
// Remove this key from the Vec to prevent double-counting
|
||||
/*
|
||||
Double-counting would be a risk if multiple participants shared an eVRF public key and
|
||||
participated. This code does still allow such participants (in order to let participants
|
||||
be weighted), and any one of them participating will count as all participating. This is
|
||||
fine as any one such participant will be able to decrypt the shares for themselves and
|
||||
all other participants, so this is still a key generated by an amount of participants who
|
||||
could simply reconstruct the key.
|
||||
*/
|
||||
let start_len = evrf_public_keys_mut.len();
|
||||
evrf_public_keys_mut.retain(|key| *key != evrf_public_key);
|
||||
let end_len = evrf_public_keys_mut.len();
|
||||
let count = start_len - end_len;
|
||||
|
||||
participating_weight += count;
|
||||
}
|
||||
if participating_weight < usize::from(t) {
|
||||
return Ok(VerifyResult::NotEnoughParticipants);
|
||||
}
|
||||
}
|
||||
|
||||
// If we now have >= t participations, calculate the group key and verification shares
|
||||
|
||||
// The group key is the sum of the zero coefficients
|
||||
let group_key = valid.values().map(|(_, evrf_data)| evrf_data.coefficients[0]).sum::<C::G>();
|
||||
|
||||
// Calculate each user's verification share
|
||||
let mut verification_shares = HashMap::with_capacity(usize::from(n));
|
||||
for i in (1 ..= n).map(Participant) {
|
||||
verification_shares
|
||||
.insert(i, (C::generator() * sum_encrypted_secret_shares[&i]) - sum_masks[&i]);
|
||||
}
|
||||
|
||||
Ok(VerifyResult::Valid(EvrfDkg {
|
||||
t,
|
||||
n,
|
||||
evrf_public_keys: evrf_public_keys.to_vec(),
|
||||
group_key,
|
||||
verification_shares,
|
||||
encrypted_secret_shares: all_encrypted_secret_shares,
|
||||
}))
|
||||
}
|
||||
|
||||
pub fn keys(
|
||||
&self,
|
||||
evrf_private_key: &Zeroizing<<C::EmbeddedCurve as Ciphersuite>::F>,
|
||||
) -> Vec<ThresholdKeys<C>> {
|
||||
let evrf_public_key = <C::EmbeddedCurve as Ciphersuite>::generator() * evrf_private_key.deref();
|
||||
let mut is = Vec::with_capacity(1);
|
||||
for (i, evrf_key) in self.evrf_public_keys.iter().enumerate() {
|
||||
if *evrf_key == evrf_public_key {
|
||||
let i = u16::try_from(i).expect("n <= u16::MAX yet i > u16::MAX?");
|
||||
let i = Participant(1 + i);
|
||||
is.push(i);
|
||||
}
|
||||
}
|
||||
|
||||
let mut res = Vec::with_capacity(is.len());
|
||||
for i in is {
|
||||
let mut secret_share = Zeroizing::new(C::F::ZERO);
|
||||
for shares in self.encrypted_secret_shares.values() {
|
||||
let (ecdh_keys, enc_share) = shares[&i];
|
||||
|
||||
let mut ecdh = Zeroizing::new(C::F::ZERO);
|
||||
for point in ecdh_keys {
|
||||
let (mut x, mut y) =
|
||||
<C::EmbeddedCurve as Ciphersuite>::G::to_xy(point * evrf_private_key.deref()).unwrap();
|
||||
*ecdh += x;
|
||||
x.zeroize();
|
||||
y.zeroize();
|
||||
}
|
||||
*secret_share += enc_share - ecdh.deref();
|
||||
}
|
||||
|
||||
debug_assert_eq!(self.verification_shares[&i], C::generator() * secret_share.deref());
|
||||
|
||||
res.push(ThresholdKeys::from(ThresholdCore {
|
||||
params: ThresholdParams::new(self.t, self.n, i).unwrap(),
|
||||
interpolation: Interpolation::Lagrange,
|
||||
secret_share,
|
||||
group_key: self.group_key,
|
||||
verification_shares: self.verification_shares.clone(),
|
||||
}));
|
||||
}
|
||||
res
|
||||
}
|
||||
}
|
||||
crypto/dkg/src/evrf/proof.rs (new file, 861 lines)
@@ -0,0 +1,861 @@
|
||||
use core::{marker::PhantomData, ops::Deref, fmt};
|
||||
|
||||
use subtle::*;
|
||||
use zeroize::{Zeroize, Zeroizing};
|
||||
|
||||
use rand_core::{RngCore, CryptoRng, SeedableRng};
|
||||
use rand_chacha::ChaCha20Rng;
|
||||
|
||||
use generic_array::{typenum::Unsigned, ArrayLength, GenericArray};
|
||||
|
||||
use blake2::{Digest, Blake2s256};
|
||||
use ciphersuite::{
|
||||
group::{
|
||||
ff::{Field, PrimeField, PrimeFieldBits},
|
||||
Group, GroupEncoding,
|
||||
},
|
||||
Ciphersuite,
|
||||
};
|
||||
|
||||
use generalized_bulletproofs::{
|
||||
*,
|
||||
transcript::{Transcript as ProverTranscript, VerifierTranscript},
|
||||
arithmetic_circuit_proof::*,
|
||||
};
|
||||
use generalized_bulletproofs_circuit_abstraction::*;
|
||||
|
||||
use ec_divisors::{DivisorCurve, new_divisor};
|
||||
use generalized_bulletproofs_ec_gadgets::*;
|
||||
|
||||
/// A pair of curves to perform the eVRF with.
|
||||
pub trait EvrfCurve: Ciphersuite {
|
||||
type EmbeddedCurve: Ciphersuite<G: DivisorCurve<FieldElement = <Self as Ciphersuite>::F>>;
|
||||
type EmbeddedCurveParameters: DiscreteLogParameters;
|
||||
}
|
||||
|
||||
#[cfg(feature = "evrf-secp256k1")]
|
||||
impl EvrfCurve for ciphersuite::Secp256k1 {
|
||||
type EmbeddedCurve = secq256k1::Secq256k1;
|
||||
type EmbeddedCurveParameters = secq256k1::Secq256k1;
|
||||
}
|
||||
|
||||
#[cfg(feature = "evrf-ed25519")]
|
||||
impl EvrfCurve for ciphersuite::Ed25519 {
|
||||
type EmbeddedCurve = embedwards25519::Embedwards25519;
|
||||
type EmbeddedCurveParameters = embedwards25519::Embedwards25519;
|
||||
}
|
||||
|
||||
#[cfg(feature = "evrf-ristretto")]
|
||||
impl EvrfCurve for ciphersuite::Ristretto {
|
||||
type EmbeddedCurve = embedwards25519::Embedwards25519;
|
||||
type EmbeddedCurveParameters = embedwards25519::Embedwards25519;
|
||||
}
|
||||
|
||||
fn sample_point<C: Ciphersuite>(rng: &mut (impl RngCore + CryptoRng)) -> C::G {
|
||||
let mut repr = <C::G as GroupEncoding>::Repr::default();
|
||||
loop {
|
||||
rng.fill_bytes(repr.as_mut());
|
||||
if let Ok(point) = C::read_G(&mut repr.as_ref()) {
|
||||
if bool::from(!point.is_identity()) {
|
||||
return point;
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/// Generators for eVRF proof.
|
||||
#[derive(Clone, Debug)]
|
||||
pub struct EvrfGenerators<C: EvrfCurve>(pub(crate) Generators<C>);
|
||||
|
||||
impl<C: EvrfCurve> EvrfGenerators<C> {
|
||||
/// Create a new set of generators.
|
||||
pub fn new(max_threshold: u16, max_participants: u16) -> EvrfGenerators<C> {
|
||||
let g = C::generator();
|
||||
let mut rng = ChaCha20Rng::from_seed(Blake2s256::digest(g.to_bytes()).into());
|
||||
let h = sample_point::<C>(&mut rng);
|
||||
let (_, generators) =
|
||||
Evrf::<C>::muls_and_generators_to_use(max_threshold.into(), max_participants.into());
|
||||
let mut g_bold = vec![];
|
||||
let mut h_bold = vec![];
|
||||
for _ in 0 .. generators {
|
||||
g_bold.push(sample_point::<C>(&mut rng));
|
||||
h_bold.push(sample_point::<C>(&mut rng));
|
||||
}
|
||||
Self(Generators::new(g, h, g_bold, h_bold).unwrap())
|
||||
}
|
||||
}
|
||||
|
||||
/// The result of proving for an eVRF.
|
||||
pub(crate) struct EvrfProveResult<C: Ciphersuite> {
|
||||
/// The coefficients for use in the DKG.
|
||||
pub(crate) coefficients: Vec<Zeroizing<C::F>>,
|
||||
/// The masks to encrypt secret shares with.
|
||||
pub(crate) encryption_masks: Vec<Zeroizing<C::F>>,
|
||||
/// The proof itself.
|
||||
pub(crate) proof: Vec<u8>,
|
||||
}
|
||||
|
||||
/// The result of verifying an eVRF.
|
||||
pub(crate) struct EvrfVerifyResult<C: EvrfCurve> {
|
||||
/// The commitments to the coefficients for use in the DKG.
|
||||
pub(crate) coefficients: Vec<C::G>,
|
||||
/// The ephemeral public keys to perform ECDHs with
|
||||
pub(crate) ecdh_keys: Vec<[<C::EmbeddedCurve as Ciphersuite>::G; 2]>,
|
||||
/// The commitments to the masks used to encrypt secret shares with.
|
||||
pub(crate) encryption_commitments: Vec<C::G>,
|
||||
}
|
||||
|
||||
impl<C: EvrfCurve> fmt::Debug for EvrfVerifyResult<C> {
|
||||
fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result {
|
||||
fmt.debug_struct("EvrfVerifyResult").finish_non_exhaustive()
|
||||
}
|
||||
}
|
||||
|
||||
/// A struct to prove/verify eVRFs with.
|
||||
pub(crate) struct Evrf<C: EvrfCurve>(PhantomData<C>);
|
||||
impl<C: EvrfCurve> Evrf<C> {
|
||||
// Sample uniform points (via rejection-sampling) on the embedded elliptic curve
|
||||
fn transcript_to_points(
|
||||
seed: [u8; 32],
|
||||
coefficients: usize,
|
||||
) -> Vec<<C::EmbeddedCurve as Ciphersuite>::G> {
|
||||
// We need to do two Diffie-Hellman's per coefficient in order to achieve an unbiased result
|
||||
let quantity = 2 * coefficients;
|
||||
|
||||
let mut rng = ChaCha20Rng::from_seed(seed);
|
||||
let mut res = Vec::with_capacity(quantity);
|
||||
for _ in 0 .. quantity {
|
||||
res.push(sample_point::<C::EmbeddedCurve>(&mut rng));
|
||||
}
|
||||
res
|
||||
}
|
||||
|
||||
/// Read a Variable from a theoretical vector commitment tape
|
||||
fn read_one_from_tape(generators_to_use: usize, start: &mut usize) -> Variable {
|
||||
// Each commitment has twice as many variables as generators in use
|
||||
let commitment = *start / (2 * generators_to_use);
|
||||
// The index will be less than the amount of generators in use, as half are left and half are
|
||||
// right
|
||||
let index = *start % generators_to_use;
|
||||
let res = if (*start / generators_to_use) % 2 == 0 {
|
||||
Variable::CG { commitment, index }
|
||||
} else {
|
||||
Variable::CH { commitment, index }
|
||||
};
|
||||
*start += 1;
|
||||
res
|
||||
}
|
||||
|
||||
/// Read a set of variables from a theoretical vector commitment tape
|
||||
fn read_from_tape<N: ArrayLength>(
|
||||
generators_to_use: usize,
|
||||
start: &mut usize,
|
||||
) -> GenericArray<Variable, N> {
|
||||
let mut buf = Vec::with_capacity(N::USIZE);
|
||||
for _ in 0 .. N::USIZE {
|
||||
buf.push(Self::read_one_from_tape(generators_to_use, start));
|
||||
}
|
||||
GenericArray::from_slice(&buf).clone()
|
||||
}
|
||||
|
||||
/// Read `PointWithDlog`s, which share a discrete logarithm, from the theoretical vector
|
||||
/// commitment tape.
|
||||
fn point_with_dlogs(
|
||||
start: &mut usize,
|
||||
quantity: usize,
|
||||
generators_to_use: usize,
|
||||
) -> Vec<PointWithDlog<C::EmbeddedCurveParameters>> {
|
||||
// We define a serialized tape of the discrete logarithm, then for each divisor/point, we push:
|
||||
// zero, x**i, y x**i, y, x_coord, y_coord
|
||||
// We then chunk that into vector commitments
|
||||
// Here, we take the assumed layout and generate the expected `Variable`s for this layout
|
||||
|
||||
let dlog = Self::read_from_tape(generators_to_use, start);
|
||||
|
||||
let mut res = Vec::with_capacity(quantity);
|
||||
let mut read_point_with_dlog = || {
|
||||
let zero = Self::read_one_from_tape(generators_to_use, start);
|
||||
let x_from_power_of_2 = Self::read_from_tape(generators_to_use, start);
|
||||
let yx = Self::read_from_tape(generators_to_use, start);
|
||||
let y = Self::read_one_from_tape(generators_to_use, start);
|
||||
let divisor = Divisor { zero, x_from_power_of_2, yx, y };
|
||||
|
||||
let point = (
|
||||
Self::read_one_from_tape(generators_to_use, start),
|
||||
Self::read_one_from_tape(generators_to_use, start),
|
||||
);
|
||||
|
||||
res.push(PointWithDlog { dlog: dlog.clone(), divisor, point });
|
||||
};
|
||||
|
||||
for _ in 0 .. quantity {
|
||||
read_point_with_dlog();
|
||||
}
|
||||
res
|
||||
}
|
||||
|
||||
fn muls_and_generators_to_use(coefficients: usize, ecdhs: usize) -> (usize, usize) {
|
||||
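// Illustrative sizing (not from the original source): with 5 coefficients and
// 10 ECDHs, expected_muls = 7 * (1 + (2 * 5) + (2 * 2 * 10)) = 357, which pads
// to the next power of two (512) and is then raised to the 1024-row floor below.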
const MULS_PER_DH: usize = 7;
|
||||
// 1 DH to prove the discrete logarithm corresponds to the eVRF public key
|
||||
// 2 DHs per generated coefficient
|
||||
// 2 DHs per generated ECDH
|
||||
let expected_muls = MULS_PER_DH * (1 + (2 * coefficients) + (2 * 2 * ecdhs));
|
||||
let generators_to_use = {
|
||||
let mut padded_pow_of_2 = 1;
|
||||
while padded_pow_of_2 < expected_muls {
|
||||
padded_pow_of_2 <<= 1;
|
||||
}
|
||||
// This may be as small as 16, which would create an excessive amount of vector commitments
|
||||
// We set a floor of 1024 rows for bandwidth reasons
|
||||
padded_pow_of_2.max(1024)
|
||||
};
|
||||
(expected_muls, generators_to_use)
|
||||
}
|
||||
|
||||
fn circuit(
|
||||
curve_spec: &CurveSpec<C::F>,
|
||||
evrf_public_key: (C::F, C::F),
|
||||
coefficients: usize,
|
||||
ecdh_commitments: &[[(C::F, C::F); 2]],
|
||||
generator_tables: &[GeneratorTable<C::F, C::EmbeddedCurveParameters>],
|
||||
circuit: &mut Circuit<C>,
|
||||
transcript: &mut impl Transcript,
|
||||
) {
|
||||
let (expected_muls, generators_to_use) =
|
||||
Self::muls_and_generators_to_use(coefficients, ecdh_commitments.len());
|
||||
let (challenge, challenged_generators) =
|
||||
circuit.discrete_log_challenge(transcript, curve_spec, generator_tables);
|
||||
debug_assert_eq!(challenged_generators.len(), 1 + (2 * coefficients) + ecdh_commitments.len());
|
||||
|
||||
// The generators tables/challenged generators are expected to have the following layouts
|
||||
// G, coefficients * [A, B], ecdhs * [P]
|
||||
#[allow(non_snake_case)]
|
||||
let challenged_G = &challenged_generators[0];
|
||||
|
||||
// Execute the circuit for the coefficients
|
||||
let mut tape_pos = 0;
|
||||
{
|
||||
let mut point_with_dlogs =
|
||||
Self::point_with_dlogs(&mut tape_pos, 1 + (2 * coefficients), generators_to_use)
|
||||
.into_iter();
|
||||
|
||||
// Verify the discrete logarithm is in fact the discrete logarithm of the eVRF public key
|
||||
let point = circuit.discrete_log(
|
||||
curve_spec,
|
||||
point_with_dlogs.next().unwrap(),
|
||||
&challenge,
|
||||
challenged_G,
|
||||
);
|
||||
circuit.equality(LinComb::from(point.x()), &LinComb::empty().constant(evrf_public_key.0));
|
||||
circuit.equality(LinComb::from(point.y()), &LinComb::empty().constant(evrf_public_key.1));
|
||||
|
||||
// Verify the DLog claims against the sampled points
|
||||
for (i, pair) in challenged_generators[1 ..].chunks(2).take(coefficients).enumerate() {
|
||||
let mut lincomb = LinComb::empty();
|
||||
debug_assert_eq!(pair.len(), 2);
|
||||
for challenged_generator in pair {
|
||||
let point = circuit.discrete_log(
|
||||
curve_spec,
|
||||
point_with_dlogs.next().unwrap(),
|
||||
&challenge,
|
||||
challenged_generator,
|
||||
);
|
||||
// For each point in this pair, add its x coordinate to a lincomb
|
||||
lincomb = lincomb.term(C::F::ONE, point.x());
|
||||
}
|
||||
// Constrain the sum of the two x coordinates to be equal to the value in the Pedersen
|
||||
// commitment
|
||||
circuit.equality(lincomb, &LinComb::from(Variable::V(i)));
|
||||
}
|
||||
debug_assert!(point_with_dlogs.next().is_none());
|
||||
}
|
||||
|
||||
// Now execute the circuit for the ECDHs
|
||||
let mut challenged_generators = challenged_generators.iter().skip(1 + (2 * coefficients));
|
||||
for (i, ecdh) in ecdh_commitments.iter().enumerate() {
|
||||
let challenged_generator = challenged_generators.next().unwrap();
|
||||
let mut lincomb = LinComb::empty();
|
||||
for ecdh in ecdh {
|
||||
let mut point_with_dlogs =
|
||||
Self::point_with_dlogs(&mut tape_pos, 2, generators_to_use).into_iter();
|
||||
|
||||
// One proof of the ECDH secret * G for the commitment published
|
||||
let point = circuit.discrete_log(
|
||||
curve_spec,
|
||||
point_with_dlogs.next().unwrap(),
|
||||
&challenge,
|
||||
challenged_G,
|
||||
);
|
||||
circuit.equality(LinComb::from(point.x()), &LinComb::empty().constant(ecdh.0));
|
||||
circuit.equality(LinComb::from(point.y()), &LinComb::empty().constant(ecdh.1));
|
||||
|
||||
// One proof of the ECDH secret * P for the ECDH
|
||||
let point = circuit.discrete_log(
|
||||
curve_spec,
|
||||
point_with_dlogs.next().unwrap(),
|
||||
&challenge,
|
||||
challenged_generator,
|
||||
);
|
||||
// For each point in this pair, add its x coordinate to a lincomb
|
||||
lincomb = lincomb.term(C::F::ONE, point.x());
|
||||
}
|
||||
|
||||
// Constrain the sum of the two x coordinates to be equal to the value in the Pedersen
|
||||
// commitment
|
||||
circuit.equality(lincomb, &LinComb::from(Variable::V(coefficients + i)));
|
||||
}
|
||||
|
||||
debug_assert_eq!(expected_muls, circuit.muls());
|
||||
debug_assert!(challenged_generators.next().is_none());
|
||||
}
|
||||
|
||||
/// Convert a scalar to a sequence of coefficients c_i, such that the scalar equals the sum of
/// c_i * 2**i and the sum of the coefficients is F::NUM_BITS.
///
/// Despite the name, the returned coefficients are not guaranteed to be bits (0 or 1).
///
/// This scalar will presumably be used in a discrete log proof. That requires calculating a
/// divisor, which is variable-time with respect to the amount of points interpolated. Since the
/// amount of points interpolated is equal to the sum of the coefficients in the polynomial, we
/// need all scalars to have a constant sum of their coefficients (instead of one which varies
/// with their bits).
///
/// We achieve this by finding the highest non-zero coefficient, decrementing it, and increasing
/// the immediately less significant coefficient by 2. This increases the sum of the coefficients
/// by 1 (-1 + 2 = 1).
fn scalar_to_bits(scalar: &<C::EmbeddedCurve as Ciphersuite>::F) -> Vec<u64> {
|
||||
let num_bits = u64::from(<<C as EvrfCurve>::EmbeddedCurve as Ciphersuite>::F::NUM_BITS);
|
||||
|
||||
// Obtain the bits of the private key
|
||||
let num_bits_usize = usize::try_from(num_bits).unwrap();
|
||||
let mut decomposition = vec![0; num_bits_usize];
|
||||
for (i, bit) in scalar.to_le_bits().into_iter().take(num_bits_usize).enumerate() {
|
||||
let bit = u64::from(u8::from(bit));
|
||||
decomposition[i] = bit;
|
||||
}
|
||||
|
||||
// The following algorithm only works if the value of the scalar exceeds num_bits
// If it doesn't, we increase it by the modulus such that it does exceed num_bits
{
|
||||
let mut less_than_num_bits = Choice::from(0);
|
||||
for i in 0 .. num_bits {
|
||||
less_than_num_bits |= scalar.ct_eq(&<C::EmbeddedCurve as Ciphersuite>::F::from(i));
|
||||
}
|
||||
let mut decomposition_of_modulus = vec![0; num_bits_usize];
|
||||
// Decompose negative one
|
||||
for (i, bit) in (-<C::EmbeddedCurve as Ciphersuite>::F::ONE)
|
||||
.to_le_bits()
|
||||
.into_iter()
|
||||
.take(num_bits_usize)
|
||||
.enumerate()
|
||||
{
|
||||
let bit = u64::from(u8::from(bit));
|
||||
decomposition_of_modulus[i] = bit;
|
||||
}
|
||||
// Increment it by one
|
||||
decomposition_of_modulus[0] += 1;
|
||||
|
||||
// Add the decomposition onto the decomposition of the modulus
|
||||
for i in 0 .. num_bits_usize {
|
||||
let new_decomposition = <_>::conditional_select(
|
||||
&decomposition[i],
|
||||
&(decomposition[i] + decomposition_of_modulus[i]),
|
||||
less_than_num_bits,
|
||||
);
|
||||
decomposition[i] = new_decomposition;
|
||||
}
|
||||
}
|
||||
|
||||
// Calculate the sum of the coefficients
let mut sum_of_coefficients: u64 = 0;
|
||||
for decomposition in &decomposition {
|
||||
sum_of_coefficients += *decomposition;
|
||||
}
|
||||
|
||||
/*
Now, because we added a log2(k)-bit number to a k-bit number, the sum of the coefficients may
be *too high*. We attempt to reduce the sum of the coefficients accordingly.

This algorithm is guaranteed to complete as expected. Take the sequence `222`. `222` becomes
`032`, which becomes `013`. Even if the next coefficient in the sequence is `2`, the third
coefficient will be reduced once and the next coefficient (`2`, increased to `3`) will only
be eligible for reduction once. This demonstrates that, even for a worst case of log2(k) `2`s
followed by `1`s (as possible if the modulus is a Mersenne prime), the log2(k) `2`s can be
reduced as necessary so long as there is a single coefficient after them (requiring the entire
sequence be at least of length log2(k) + 1). For a 2-bit number, log2(k) + 1 == 2, so this
holds for any odd prime field.

To fully type out the demonstration for the Mersenne prime 3, with the scalar to encode being 1
(the highest value less than the number of bits):

10 - Little-endian bits of 1
21 - Little-endian bits of 1, plus the modulus
02 - After one reduction, where the sum of the coefficients does in fact equal 2 (the target)
*/
|
||||
{
|
||||
let mut log2_num_bits = 0;
|
||||
while (1 << log2_num_bits) < num_bits {
|
||||
log2_num_bits += 1;
|
||||
}
|
||||
|
||||
for _ in 0 .. log2_num_bits {
|
||||
// If the sum of coefficients is the number of bits, we're done
let mut done = sum_of_coefficients.ct_eq(&num_bits);
|
||||
|
||||
for i in 0 .. (num_bits_usize - 1) {
|
||||
let should_act = (!done) & decomposition[i].ct_gt(&1);
|
||||
// Subtract 2 from this coefficient
|
||||
let amount_to_sub = <_>::conditional_select(&0, &2, should_act);
|
||||
decomposition[i] -= amount_to_sub;
|
||||
// Add 1 to the next coefficient
|
||||
let amount_to_add = <_>::conditional_select(&0, &1, should_act);
|
||||
decomposition[i + 1] += amount_to_add;
|
||||
|
||||
// Also update the sum of coefficients
|
||||
sum_of_coefficients -= <_>::conditional_select(&0, &1, should_act);
|
||||
|
||||
// If we updated the coefficients this loop iter, we're done for this loop iter
|
||||
done |= should_act;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
for _ in 0 .. num_bits {
|
||||
// If the sum of coefficients is the number of bits, we're done
let mut done = sum_of_coefficients.ct_eq(&num_bits);
|
||||
|
||||
// Find the highest coefficient currently non-zero
|
||||
for i in (1 .. decomposition.len()).rev() {
|
||||
// If this is non-zero, we should decrement this coefficient if we haven't already
|
||||
// decremented a coefficient this round
|
||||
let is_non_zero = !(0.ct_eq(&decomposition[i]));
|
||||
let should_act = (!done) & is_non_zero;
|
||||
|
||||
// Update this coefficient and the prior coefficient
|
||||
let amount_to_sub = <_>::conditional_select(&0, &1, should_act);
|
||||
decomposition[i] -= amount_to_sub;
|
||||
|
||||
let amount_to_add = <_>::conditional_select(&0, &2, should_act);
|
||||
// i must be at least 1, so i - 1 will be at least 0 (meaning it's safe to index with)
|
||||
decomposition[i - 1] += amount_to_add;
|
||||
|
||||
// Also update the sum of coefficients
|
||||
sum_of_coefficients += <_>::conditional_select(&0, &1, should_act);
|
||||
|
||||
// If we updated the coefficients this loop iter, we're done for this loop iter
|
||||
done |= should_act;
|
||||
}
|
||||
}
|
||||
debug_assert!(bool::from(decomposition.iter().sum::<u64>().ct_eq(&num_bits)));
|
||||
|
||||
decomposition
|
||||
}
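// Illustrative example (not part of the original source): a worked run of scalar_to_bits over a
// toy 3-bit field with modulus 7 (a Mersenne prime), encoding the scalar 2.
//
//   010 - little-endian bits of 2, with a coefficient sum of 1 (below the target of 3)
//   121 - after adding the decomposition of the modulus (value 9, coefficient sum 4)
//   102 - after one -2/+1 reduction at index 1 (value still 9 = 2 mod 7, coefficient sum 3)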
|
||||
|
||||
/// Prove a point on an elliptic curve had its discrete logarithm generated via an eVRF.
|
||||
pub(crate) fn prove(
|
||||
rng: &mut (impl RngCore + CryptoRng),
|
||||
generators: &Generators<C>,
|
||||
transcript: [u8; 32],
|
||||
coefficients: usize,
|
||||
ecdh_public_keys: &[<<C as EvrfCurve>::EmbeddedCurve as Ciphersuite>::G],
|
||||
evrf_private_key: &Zeroizing<<<C as EvrfCurve>::EmbeddedCurve as Ciphersuite>::F>,
|
||||
) -> Result<EvrfProveResult<C>, AcError> {
|
||||
let curve_spec = CurveSpec {
|
||||
a: <<C as EvrfCurve>::EmbeddedCurve as Ciphersuite>::G::a(),
|
||||
b: <<C as EvrfCurve>::EmbeddedCurve as Ciphersuite>::G::b(),
|
||||
};
|
||||
|
||||
// A tape of the discrete logarithm, then [zero, x**i, y x**i, y, x_coord, y_coord]
|
||||
let mut vector_commitment_tape = vec![];
|
||||
|
||||
let mut generator_tables = Vec::with_capacity(1 + (2 * coefficients) + ecdh_public_keys.len());
|
||||
|
||||
// A closure to calculate a divisor and push it onto the tape
// This defines a vec, divisor_points, outside of the closure to reuse its allocation
let mut divisor_points =
|
||||
Vec::with_capacity((<C::EmbeddedCurve as Ciphersuite>::F::NUM_BITS as usize) + 1);
|
||||
let mut divisor =
|
||||
|vector_commitment_tape: &mut Vec<_>,
|
||||
dlog: &[u64],
|
||||
push_generator: bool,
|
||||
generator: <<C as EvrfCurve>::EmbeddedCurve as Ciphersuite>::G,
|
||||
dh: <<C as EvrfCurve>::EmbeddedCurve as Ciphersuite>::G| {
|
||||
if push_generator {
|
||||
let (x, y) = <C::EmbeddedCurve as Ciphersuite>::G::to_xy(generator).unwrap();
|
||||
generator_tables.push(GeneratorTable::new(&curve_spec, x, y));
|
||||
}
|
||||
|
||||
{
|
||||
let mut generator = generator;
|
||||
for coefficient in dlog {
|
||||
let mut coefficient = *coefficient;
|
||||
while coefficient != 0 {
|
||||
coefficient -= 1;
|
||||
divisor_points.push(generator);
|
||||
}
|
||||
generator = generator.double();
|
||||
}
|
||||
debug_assert_eq!(
|
||||
dlog.iter().sum::<u64>(),
|
||||
u64::from(<C::EmbeddedCurve as Ciphersuite>::F::NUM_BITS)
|
||||
);
|
||||
}
|
||||
divisor_points.push(-dh);
|
||||
let mut divisor = new_divisor(&divisor_points).unwrap().normalize_x_coefficient();
|
||||
divisor_points.zeroize();
|
||||
|
||||
vector_commitment_tape.push(divisor.zero_coefficient);
|
||||
|
||||
for coefficient in divisor.x_coefficients.iter().skip(1) {
|
||||
vector_commitment_tape.push(*coefficient);
|
||||
}
|
||||
for _ in divisor.x_coefficients.len() ..
|
||||
<C::EmbeddedCurveParameters as DiscreteLogParameters>::XCoefficientsMinusOne::USIZE
|
||||
{
|
||||
vector_commitment_tape.push(<C as Ciphersuite>::F::ZERO);
|
||||
}
|
||||
|
||||
for coefficient in divisor.yx_coefficients.first().unwrap_or(&vec![]) {
|
||||
vector_commitment_tape.push(*coefficient);
|
||||
}
|
||||
for _ in divisor.yx_coefficients.first().unwrap_or(&vec![]).len() ..
|
||||
<C::EmbeddedCurveParameters as DiscreteLogParameters>::YxCoefficients::USIZE
|
||||
{
|
||||
vector_commitment_tape.push(<C as Ciphersuite>::F::ZERO);
|
||||
}
|
||||
|
||||
vector_commitment_tape
|
||||
.push(divisor.y_coefficients.first().copied().unwrap_or(<C as Ciphersuite>::F::ZERO));
|
||||
|
||||
divisor.zeroize();
|
||||
drop(divisor);
|
||||
|
||||
let (x, y) = <C::EmbeddedCurve as Ciphersuite>::G::to_xy(dh).unwrap();
|
||||
vector_commitment_tape.push(x);
|
||||
vector_commitment_tape.push(y);
|
||||
|
||||
(x, y)
|
||||
};
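// Illustrative example (not part of the original source): with dlog = [1, 0, 2] and generator G,
// the closure above pushes [G, 4G, 4G, -dh] as the points to interpolate, since the coefficient
// at index i contributes 2^i * G that many times before -dh is appended.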
|
||||
|
||||
// Start with the coefficients
|
||||
let evrf_public_key;
|
||||
let mut actual_coefficients = Vec::with_capacity(coefficients);
|
||||
{
|
||||
let mut dlog = Self::scalar_to_bits(evrf_private_key);
|
||||
let points = Self::transcript_to_points(transcript, coefficients);
|
||||
|
||||
// Start by pushing the discrete logarithm onto the tape
|
||||
for coefficient in &dlog {
|
||||
vector_commitment_tape.push(<_>::from(*coefficient));
|
||||
}
|
||||
|
||||
// Push a divisor for proving that we're using the correct scalar
|
||||
evrf_public_key = divisor(
|
||||
&mut vector_commitment_tape,
|
||||
&dlog,
|
||||
true,
|
||||
<<C as EvrfCurve>::EmbeddedCurve as Ciphersuite>::generator(),
|
||||
<<C as EvrfCurve>::EmbeddedCurve as Ciphersuite>::generator() * evrf_private_key.deref(),
|
||||
);
|
||||
|
||||
// Push a divisor for each point we use in the eVRF
|
||||
for pair in points.chunks(2) {
|
||||
let mut res = Zeroizing::new(C::F::ZERO);
|
||||
for point in pair {
|
||||
let (dh_x, _) = divisor(
|
||||
&mut vector_commitment_tape,
|
||||
&dlog,
|
||||
true,
|
||||
*point,
|
||||
*point * evrf_private_key.deref(),
|
||||
);
|
||||
*res += dh_x;
|
||||
}
|
||||
actual_coefficients.push(res);
|
||||
}
|
||||
debug_assert_eq!(actual_coefficients.len(), coefficients);
|
||||
|
||||
dlog.zeroize();
|
||||
}
|
||||
|
||||
// Now do the ECDHs for the encryption
|
||||
let mut encryption_masks = Vec::with_capacity(ecdh_public_keys.len());
|
||||
let mut ecdh_commitments = Vec::with_capacity(2 * ecdh_public_keys.len());
|
||||
let mut ecdh_commitments_xy = Vec::with_capacity(ecdh_public_keys.len());
|
||||
for ecdh_public_key in ecdh_public_keys {
|
||||
ecdh_commitments_xy.push([(C::F::ZERO, C::F::ZERO); 2]);
|
||||
|
||||
let mut res = Zeroizing::new(C::F::ZERO);
|
||||
for j in 0 .. 2 {
|
||||
let mut ecdh_private_key;
|
||||
loop {
|
||||
ecdh_private_key = <C::EmbeddedCurve as Ciphersuite>::F::random(&mut *rng);
|
||||
// Generate a non-0 ECDH private key, as necessary to not produce an identity output
|
||||
// Identity isn't representable with the divisors, hence the explicit effort
|
||||
if bool::from(!ecdh_private_key.is_zero()) {
|
||||
break;
|
||||
}
|
||||
}
|
||||
let mut dlog = Self::scalar_to_bits(&ecdh_private_key);
|
||||
let ecdh_commitment = <C::EmbeddedCurve as Ciphersuite>::generator() * ecdh_private_key;
|
||||
ecdh_commitments.push(ecdh_commitment);
|
||||
ecdh_commitments_xy.last_mut().unwrap()[j] =
|
||||
<<C::EmbeddedCurve as Ciphersuite>::G as DivisorCurve>::to_xy(ecdh_commitment).unwrap();
|
||||
|
||||
// Start by pushing the discrete logarithm onto the tape
|
||||
for coefficient in &dlog {
|
||||
vector_commitment_tape.push(<_>::from(*coefficient));
|
||||
}
|
||||
|
||||
// Push a divisor for proving that we're using the correct scalar for the commitment
|
||||
divisor(
|
||||
&mut vector_commitment_tape,
|
||||
&dlog,
|
||||
false,
|
||||
<<C as EvrfCurve>::EmbeddedCurve as Ciphersuite>::generator(),
|
||||
<<C as EvrfCurve>::EmbeddedCurve as Ciphersuite>::generator() * ecdh_private_key,
|
||||
);
|
||||
// Push a divisor for the key we're performing the ECDH with
|
||||
let (dh_x, _) = divisor(
|
||||
&mut vector_commitment_tape,
|
||||
&dlog,
|
||||
j == 0,
|
||||
*ecdh_public_key,
|
||||
*ecdh_public_key * ecdh_private_key,
|
||||
);
|
||||
*res += dh_x;
|
||||
|
||||
ecdh_private_key.zeroize();
|
||||
dlog.zeroize();
|
||||
}
|
||||
encryption_masks.push(res);
|
||||
}
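// Illustrative note (not part of the original source): since each R_j = s_j * G is published and
// the mask is x(s_0 * P) + x(s_1 * P) for the recipient's public key P = p * G, the recipient
// can recompute the same mask as x(p * R_0) + x(p * R_1) without learning s_0 or s_1.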
|
||||
debug_assert_eq!(encryption_masks.len(), ecdh_public_keys.len());
|
||||
|
||||
// Now that we have the vector commitment tape, chunk it
|
||||
let (_, generators_to_use) =
|
||||
Self::muls_and_generators_to_use(coefficients, ecdh_public_keys.len());
|
||||
|
||||
let mut vector_commitments =
|
||||
Vec::with_capacity(vector_commitment_tape.len().div_ceil(2 * generators_to_use));
|
||||
for chunk in vector_commitment_tape.chunks(2 * generators_to_use) {
|
||||
let g_values = chunk[.. generators_to_use.min(chunk.len())].to_vec().into();
|
||||
let h_values = chunk[generators_to_use.min(chunk.len()) ..].to_vec().into();
|
||||
vector_commitments.push(PedersenVectorCommitment {
|
||||
g_values,
|
||||
h_values,
|
||||
mask: C::F::random(&mut *rng),
|
||||
});
|
||||
}
|
||||
|
||||
vector_commitment_tape.zeroize();
|
||||
drop(vector_commitment_tape);
|
||||
|
||||
let mut commitments = Vec::with_capacity(coefficients + ecdh_public_keys.len());
|
||||
for coefficient in &actual_coefficients {
|
||||
commitments.push(PedersenCommitment { value: **coefficient, mask: C::F::random(&mut *rng) });
|
||||
}
|
||||
for enc_mask in &encryption_masks {
|
||||
commitments.push(PedersenCommitment { value: **enc_mask, mask: C::F::random(&mut *rng) });
|
||||
}
|
||||
|
||||
let mut transcript = ProverTranscript::new(transcript);
|
||||
let commited_commitments = transcript.write_commitments(
|
||||
vector_commitments
|
||||
.iter()
|
||||
.map(|commitment| {
|
||||
commitment
|
||||
.commit(generators.g_bold_slice(), generators.h_bold_slice(), generators.h())
|
||||
.ok_or(AcError::NotEnoughGenerators)
|
||||
})
|
||||
.collect::<Result<_, _>>()?,
|
||||
commitments
|
||||
.iter()
|
||||
.map(|commitment| commitment.commit(generators.g(), generators.h()))
|
||||
.collect(),
|
||||
);
|
||||
for ecdh_commitment in ecdh_commitments {
|
||||
transcript.push_point(ecdh_commitment);
|
||||
}
|
||||
|
||||
let mut circuit = Circuit::prove(vector_commitments, commitments.clone());
|
||||
Self::circuit(
|
||||
&curve_spec,
|
||||
evrf_public_key,
|
||||
coefficients,
|
||||
&ecdh_commitments_xy,
|
||||
&generator_tables,
|
||||
&mut circuit,
|
||||
&mut transcript,
|
||||
);
|
||||
|
||||
let (statement, Some(witness)) = circuit
|
||||
.statement(
|
||||
generators.reduce(generators_to_use).ok_or(AcError::NotEnoughGenerators)?,
|
||||
commited_commitments,
|
||||
)
|
||||
.unwrap()
|
||||
else {
|
||||
panic!("proving yet wasn't yielded the witness");
|
||||
};
|
||||
statement.prove(&mut *rng, &mut transcript, witness).unwrap();
|
||||
|
||||
// Push the reveal onto the transcript
|
||||
for commitment in &commitments {
|
||||
transcript.push_point(generators.g() * commitment.value);
|
||||
}
|
||||
|
||||
// Define a weight to aggregate the commitments with
|
||||
let mut agg_weights = Vec::with_capacity(commitments.len());
|
||||
agg_weights.push(C::F::ONE);
|
||||
while agg_weights.len() < commitments.len() {
|
||||
agg_weights.push(transcript.challenge::<C::F>());
|
||||
}
|
||||
let mut x = commitments
|
||||
.iter()
|
||||
.zip(&agg_weights)
|
||||
.map(|(commitment, weight)| commitment.mask * *weight)
|
||||
.sum::<C::F>();
|
||||
|
||||
// Do a Schnorr PoK for the randomness of the aggregated Pedersen commitment
|
||||
let mut r = C::F::random(&mut *rng);
|
||||
transcript.push_point(generators.h() * r);
|
||||
let c = transcript.challenge::<C::F>();
|
||||
transcript.push_scalar(r + (c * x));
|
||||
r.zeroize();
|
||||
x.zeroize();
|
||||
|
||||
Ok(EvrfProveResult {
|
||||
coefficients: actual_coefficients,
|
||||
encryption_masks,
|
||||
proof: transcript.complete(),
|
||||
})
|
||||
}
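// Illustrative note (not part of the original source): the returned proof is the transcript of
// the vector/Pedersen commitments, the ECDH commitments, the GBP arithmetic-circuit proof, the
// openings (generator * value) of each Pedersen commitment, and the Schnorr PoK (R, s) for the
// aggregated masks. verify below reads these back in the same order.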
|
||||
|
||||
/// Verify an eVRF proof, returning the commitments it outputs.
#[allow(clippy::too_many_arguments)]
|
||||
pub(crate) fn verify(
|
||||
rng: &mut (impl RngCore + CryptoRng),
|
||||
generators: &Generators<C>,
|
||||
verifier: &mut BatchVerifier<C>,
|
||||
transcript: [u8; 32],
|
||||
coefficients: usize,
|
||||
ecdh_public_keys: &[<<C as EvrfCurve>::EmbeddedCurve as Ciphersuite>::G],
|
||||
evrf_public_key: <<C as EvrfCurve>::EmbeddedCurve as Ciphersuite>::G,
|
||||
proof: &[u8],
|
||||
) -> Result<EvrfVerifyResult<C>, ()> {
|
||||
let curve_spec = CurveSpec {
|
||||
a: <<C as EvrfCurve>::EmbeddedCurve as Ciphersuite>::G::a(),
|
||||
b: <<C as EvrfCurve>::EmbeddedCurve as Ciphersuite>::G::b(),
|
||||
};
|
||||
|
||||
let mut generator_tables = Vec::with_capacity(1 + (2 * coefficients) + ecdh_public_keys.len());
|
||||
{
|
||||
let (x, y) =
|
||||
<C::EmbeddedCurve as Ciphersuite>::G::to_xy(<C::EmbeddedCurve as Ciphersuite>::generator())
|
||||
.unwrap();
|
||||
generator_tables.push(GeneratorTable::new(&curve_spec, x, y));
|
||||
}
|
||||
let points = Self::transcript_to_points(transcript, coefficients);
|
||||
for generator in points {
|
||||
let (x, y) = <C::EmbeddedCurve as Ciphersuite>::G::to_xy(generator).unwrap();
|
||||
generator_tables.push(GeneratorTable::new(&curve_spec, x, y));
|
||||
}
|
||||
for generator in ecdh_public_keys {
|
||||
let (x, y) = <C::EmbeddedCurve as Ciphersuite>::G::to_xy(*generator).unwrap();
|
||||
generator_tables.push(GeneratorTable::new(&curve_spec, x, y));
|
||||
}
|
||||
|
||||
let (_, generators_to_use) =
|
||||
Self::muls_and_generators_to_use(coefficients, ecdh_public_keys.len());
|
||||
|
||||
let mut transcript = VerifierTranscript::new(transcript, proof);
|
||||
|
||||
let dlog_len = <C::EmbeddedCurveParameters as DiscreteLogParameters>::ScalarBits::USIZE;
|
||||
let divisor_len = 1 +
|
||||
<C::EmbeddedCurveParameters as DiscreteLogParameters>::XCoefficientsMinusOne::USIZE +
|
||||
<C::EmbeddedCurveParameters as DiscreteLogParameters>::YxCoefficients::USIZE +
|
||||
1;
|
||||
let dlog_proof_len = divisor_len + 2;
|
||||
|
||||
let coeffs_vc_variables = dlog_len + ((1 + (2 * coefficients)) * dlog_proof_len);
|
||||
let ecdhs_vc_variables = ((2 * ecdh_public_keys.len()) * dlog_len) +
|
||||
((2 * 2 * ecdh_public_keys.len()) * dlog_proof_len);
|
||||
let vcs = (coeffs_vc_variables + ecdhs_vc_variables).div_ceil(2 * generators_to_use);
|
||||
|
||||
let all_commitments =
|
||||
transcript.read_commitments(vcs, coefficients + ecdh_public_keys.len()).map_err(|_| ())?;
|
||||
let commitments = all_commitments.V().to_vec();
|
||||
|
||||
let mut ecdh_keys = Vec::with_capacity(ecdh_public_keys.len());
|
||||
let mut ecdh_keys_xy = Vec::with_capacity(ecdh_public_keys.len());
|
||||
for _ in 0 .. ecdh_public_keys.len() {
|
||||
let ecdh_keys_i = [
|
||||
transcript.read_point::<C::EmbeddedCurve>().map_err(|_| ())?,
|
||||
transcript.read_point::<C::EmbeddedCurve>().map_err(|_| ())?,
|
||||
];
|
||||
ecdh_keys.push(ecdh_keys_i);
|
||||
// This bans zero ECDH keys
|
||||
ecdh_keys_xy.push([
|
||||
<<C::EmbeddedCurve as Ciphersuite>::G as DivisorCurve>::to_xy(ecdh_keys_i[0]).ok_or(())?,
|
||||
<<C::EmbeddedCurve as Ciphersuite>::G as DivisorCurve>::to_xy(ecdh_keys_i[1]).ok_or(())?,
|
||||
]);
|
||||
}
|
||||
|
||||
let mut circuit = Circuit::verify();
|
||||
Self::circuit(
|
||||
&curve_spec,
|
||||
<C::EmbeddedCurve as Ciphersuite>::G::to_xy(evrf_public_key).ok_or(())?,
|
||||
coefficients,
|
||||
&ecdh_keys_xy,
|
||||
&generator_tables,
|
||||
&mut circuit,
|
||||
&mut transcript,
|
||||
);
|
||||
|
||||
let (statement, None) =
|
||||
circuit.statement(generators.reduce(generators_to_use).ok_or(())?, all_commitments).unwrap()
|
||||
else {
|
||||
panic!("verifying yet was yielded a witness");
|
||||
};
|
||||
|
||||
statement.verify(rng, verifier, &mut transcript).map_err(|_| ())?;
|
||||
|
||||
// Read the openings for the commitments
|
||||
let mut openings = Vec::with_capacity(commitments.len());
|
||||
for _ in 0 .. commitments.len() {
|
||||
openings.push(transcript.read_point::<C>().map_err(|_| ())?);
|
||||
}
|
||||
|
||||
// Verify the openings of the commitments
|
||||
let mut agg_weights = Vec::with_capacity(commitments.len());
|
||||
agg_weights.push(C::F::ONE);
|
||||
while agg_weights.len() < commitments.len() {
|
||||
agg_weights.push(transcript.challenge::<C::F>());
|
||||
}
|
||||
|
||||
let sum_points =
|
||||
openings.iter().zip(&agg_weights).map(|(point, weight)| *point * *weight).sum::<C::G>();
|
||||
let sum_commitments =
|
||||
commitments.into_iter().zip(agg_weights).map(|(point, weight)| point * weight).sum::<C::G>();
|
||||
#[allow(non_snake_case)]
|
||||
let A = sum_commitments - sum_points;
|
||||
|
||||
#[allow(non_snake_case)]
|
||||
let R = transcript.read_point::<C>().map_err(|_| ())?;
|
||||
let c = transcript.challenge::<C::F>();
|
||||
let s = transcript.read_scalar::<C>().map_err(|_| ())?;
|
||||
|
||||
// Doesn't batch verify this as we can't access the internals of the GBP batch verifier
|
||||
if (R + (A * c)) != (generators.h() * s) {
|
||||
Err(())?;
|
||||
}
|
||||
|
||||
if !transcript.complete().is_empty() {
|
||||
Err(())?
|
||||
};
|
||||
|
||||
let encryption_commitments = openings[coefficients ..].to_vec();
|
||||
let coefficients = openings[.. coefficients].to_vec();
|
||||
Ok(EvrfVerifyResult { coefficients, ecdh_keys, encryption_commitments })
|
||||
}
|
||||
}
|
||||
@@ -21,6 +21,10 @@ pub mod encryption;
|
||||
#[cfg(feature = "std")]
|
||||
pub mod pedpop;
|
||||
|
||||
/// The one-round DKG described in the [eVRF paper](https://eprint.iacr.org/2024/397).
|
||||
#[cfg(all(feature = "std", feature = "evrf"))]
|
||||
pub mod evrf;
|
||||
|
||||
/// Promote keys between ciphersuites.
|
||||
#[cfg(feature = "std")]
|
||||
pub mod promote;
|
||||
@@ -205,25 +209,37 @@ mod lib {
|
||||
}
|
||||
}
|
||||
|
||||
/// Calculate the lagrange coefficient for a signing set.
|
||||
pub fn lagrange<F: PrimeField>(i: Participant, included: &[Participant]) -> F {
|
||||
let i_f = F::from(u64::from(u16::from(i)));
|
||||
#[derive(Clone, PartialEq, Eq, Debug, Zeroize)]
|
||||
pub(crate) enum Interpolation<F: Zeroize + PrimeField> {
|
||||
Constant(Vec<F>),
|
||||
Lagrange,
|
||||
}
|
||||
|
||||
let mut num = F::ONE;
|
||||
let mut denom = F::ONE;
|
||||
for l in included {
|
||||
if i == *l {
|
||||
continue;
|
||||
impl<F: Zeroize + PrimeField> Interpolation<F> {
|
||||
pub(crate) fn interpolation_factor(&self, i: Participant, included: &[Participant]) -> F {
|
||||
match self {
|
||||
Interpolation::Constant(c) => c[usize::from(u16::from(i) - 1)],
|
||||
Interpolation::Lagrange => {
|
||||
let i_f = F::from(u64::from(u16::from(i)));
|
||||
|
||||
let mut num = F::ONE;
|
||||
let mut denom = F::ONE;
|
||||
for l in included {
|
||||
if i == *l {
|
||||
continue;
|
||||
}
|
||||
|
||||
let share = F::from(u64::from(u16::from(*l)));
|
||||
num *= share;
|
||||
denom *= share - i_f;
|
||||
}
|
||||
|
||||
// Safe as this will only be 0 if we're part of the above loop
|
||||
// (which we have an if case to avoid)
|
||||
num * denom.invert().unwrap()
|
||||
}
|
||||
}
|
||||
|
||||
let share = F::from(u64::from(u16::from(*l)));
|
||||
num *= share;
|
||||
denom *= share - i_f;
|
||||
}
|
||||
|
||||
// Safe as this will only be 0 if we're part of the above loop
|
||||
// (which we have an if case to avoid)
|
||||
num * denom.invert().unwrap()
|
||||
}
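// Illustrative example (not part of the original source): for included = [1, 2, 3], the Lagrange
// factors evaluate the shared polynomial at zero, e.g. for i = 1 the factor is
// (2 * 3) / ((2 - 1) * (3 - 1)) = 3, for i = 2 it's (1 * 3) / ((1 - 2) * (3 - 2)) = -3, and for
// i = 3 it's (1 * 2) / ((1 - 3) * (2 - 3)) = 1, all calculated in the scalar field.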
|
||||
|
||||
/// Keys and verification shares generated by a DKG.
|
||||
@@ -232,6 +248,8 @@ mod lib {
|
||||
pub struct ThresholdCore<C: Ciphersuite> {
|
||||
/// Threshold Parameters.
|
||||
pub(crate) params: ThresholdParams,
|
||||
/// The interpolation method used.
|
||||
pub(crate) interpolation: Interpolation<C::F>,
|
||||
|
||||
/// Secret share key.
|
||||
pub(crate) secret_share: Zeroizing<C::F>,
|
||||
@@ -246,6 +264,7 @@ mod lib {
|
||||
fmt
|
||||
.debug_struct("ThresholdCore")
|
||||
.field("params", &self.params)
|
||||
.field("interpolation", &self.interpolation)
|
||||
.field("group_key", &self.group_key)
|
||||
.field("verification_shares", &self.verification_shares)
|
||||
.finish_non_exhaustive()
|
||||
@@ -255,6 +274,7 @@ mod lib {
|
||||
impl<C: Ciphersuite> Zeroize for ThresholdCore<C> {
|
||||
fn zeroize(&mut self) {
|
||||
self.params.zeroize();
|
||||
self.interpolation.zeroize();
|
||||
self.secret_share.zeroize();
|
||||
self.group_key.zeroize();
|
||||
for share in self.verification_shares.values_mut() {
|
||||
@@ -266,16 +286,14 @@ mod lib {
|
||||
impl<C: Ciphersuite> ThresholdCore<C> {
|
||||
pub(crate) fn new(
|
||||
params: ThresholdParams,
|
||||
interpolation: Interpolation<C::F>,
|
||||
secret_share: Zeroizing<C::F>,
|
||||
verification_shares: HashMap<Participant, C::G>,
|
||||
) -> ThresholdCore<C> {
|
||||
let t = (1 ..= params.t()).map(Participant).collect::<Vec<_>>();
|
||||
ThresholdCore {
|
||||
params,
|
||||
secret_share,
|
||||
group_key: t.iter().map(|i| verification_shares[i] * lagrange::<C::F>(*i, &t)).sum(),
|
||||
verification_shares,
|
||||
}
|
||||
let group_key =
|
||||
t.iter().map(|i| verification_shares[i] * interpolation.interpolation_factor(*i, &t)).sum();
|
||||
ThresholdCore { params, interpolation, secret_share, group_key, verification_shares }
|
||||
}
|
||||
|
||||
/// Parameters for these keys.
|
||||
@@ -304,6 +322,15 @@ mod lib {
|
||||
writer.write_all(&self.params.t.to_le_bytes())?;
|
||||
writer.write_all(&self.params.n.to_le_bytes())?;
|
||||
writer.write_all(&self.params.i.to_bytes())?;
|
||||
match &self.interpolation {
|
||||
Interpolation::Constant(c) => {
|
||||
writer.write_all(&[0])?;
|
||||
for c in c {
|
||||
writer.write_all(c.to_repr().as_ref())?;
|
||||
}
|
||||
}
|
||||
Interpolation::Lagrange => writer.write_all(&[1])?,
|
||||
};
|
||||
let mut share_bytes = self.secret_share.to_repr();
|
||||
writer.write_all(share_bytes.as_ref())?;
|
||||
share_bytes.as_mut().zeroize();
|
||||
@@ -352,6 +379,20 @@ mod lib {
|
||||
)
|
||||
};
|
||||
|
||||
let mut interpolation = [0];
|
||||
reader.read_exact(&mut interpolation)?;
|
||||
let interpolation = match interpolation[0] {
|
||||
0 => Interpolation::Constant({
|
||||
let mut res = Vec::with_capacity(usize::from(n));
|
||||
for _ in 0 .. n {
|
||||
res.push(C::read_F(reader)?);
|
||||
}
|
||||
res
|
||||
}),
|
||||
1 => Interpolation::Lagrange,
|
||||
_ => Err(io::Error::other("invalid interpolation method"))?,
|
||||
};
|
||||
|
||||
let secret_share = Zeroizing::new(C::read_F(reader)?);
|
||||
|
||||
let mut verification_shares = HashMap::new();
|
||||
@@ -361,6 +402,7 @@ mod lib {
|
||||
|
||||
Ok(ThresholdCore::new(
|
||||
ThresholdParams::new(t, n, i).map_err(|_| io::Error::other("invalid parameters"))?,
|
||||
interpolation,
|
||||
secret_share,
|
||||
verification_shares,
|
||||
))
|
||||
@@ -383,6 +425,7 @@ mod lib {
|
||||
/// View of keys, interpolated and offset for usage.
|
||||
#[derive(Clone)]
|
||||
pub struct ThresholdView<C: Ciphersuite> {
|
||||
interpolation: Interpolation<C::F>,
|
||||
offset: C::F,
|
||||
group_key: C::G,
|
||||
included: Vec<Participant>,
|
||||
@@ -395,6 +438,7 @@ mod lib {
|
||||
fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result {
|
||||
fmt
|
||||
.debug_struct("ThresholdView")
|
||||
.field("interpolation", &self.interpolation)
|
||||
.field("offset", &self.offset)
|
||||
.field("group_key", &self.group_key)
|
||||
.field("included", &self.included)
|
||||
@@ -480,12 +524,13 @@ mod lib {
|
||||
included.sort();
|
||||
|
||||
let mut secret_share = Zeroizing::new(
|
||||
lagrange::<C::F>(self.params().i(), &included) * self.secret_share().deref(),
|
||||
self.core.interpolation.interpolation_factor(self.params().i(), &included) *
|
||||
self.secret_share().deref(),
|
||||
);
|
||||
|
||||
let mut verification_shares = self.verification_shares();
|
||||
for (i, share) in &mut verification_shares {
|
||||
*share *= lagrange::<C::F>(*i, &included);
|
||||
*share *= self.core.interpolation.interpolation_factor(*i, &included);
|
||||
}
|
||||
|
||||
// The offset is included by adding it to the participant with the lowest ID
|
||||
@@ -496,6 +541,7 @@ mod lib {
|
||||
*verification_shares.get_mut(&included[0]).unwrap() += C::generator() * offset;
|
||||
|
||||
Ok(ThresholdView {
|
||||
interpolation: self.core.interpolation.clone(),
|
||||
offset,
|
||||
group_key: self.group_key(),
|
||||
secret_share,
|
||||
@@ -528,6 +574,14 @@ mod lib {
|
||||
&self.included
|
||||
}
|
||||
|
||||
/// Return the interpolation factor for a signer.
|
||||
pub fn interpolation_factor(&self, participant: Participant) -> Option<C::F> {
|
||||
if !self.included.contains(&participant) {
|
||||
None?
|
||||
}
|
||||
Some(self.interpolation.interpolation_factor(participant, &self.included))
|
||||
}
|
||||
|
||||
/// Return the interpolated, offset secret share.
|
||||
pub fn secret_share(&self) -> &Zeroizing<C::F> {
|
||||
&self.secret_share
|
||||
|
||||
@@ -7,8 +7,6 @@ use std_shims::collections::HashMap;
|
||||
#[cfg(feature = "std")]
|
||||
use zeroize::Zeroizing;
|
||||
|
||||
#[cfg(feature = "std")]
|
||||
use ciphersuite::group::ff::Field;
|
||||
use ciphersuite::{
|
||||
group::{Group, GroupEncoding},
|
||||
Ciphersuite,
|
||||
@@ -16,7 +14,7 @@ use ciphersuite::{
|
||||
|
||||
use crate::DkgError;
|
||||
#[cfg(feature = "std")]
|
||||
use crate::{Participant, ThresholdParams, ThresholdCore, lagrange};
|
||||
use crate::{Participant, ThresholdParams, Interpolation, ThresholdCore};
|
||||
|
||||
fn check_keys<C: Ciphersuite>(keys: &[C::G]) -> Result<u16, DkgError<()>> {
|
||||
if keys.is_empty() {
|
||||
@@ -67,6 +65,7 @@ pub fn musig_key<C: Ciphersuite>(context: &[u8], keys: &[C::G]) -> Result<C::G,
|
||||
let transcript = binding_factor_transcript::<C>(context, keys)?;
|
||||
let mut res = C::G::identity();
|
||||
for i in 1 ..= keys_len {
|
||||
// TODO: Calculate this with a multiexp
|
||||
res += keys[usize::from(i - 1)] * binding_factor::<C>(transcript.clone(), i);
|
||||
}
|
||||
Ok(res)
|
||||
@@ -104,38 +103,26 @@ pub fn musig<C: Ciphersuite>(
|
||||
binding.push(binding_factor::<C>(transcript.clone(), i));
|
||||
}
|
||||
|
||||
// Multiply our private key by our binding factor
|
||||
let mut secret_share = private_key.clone();
|
||||
*secret_share *= binding[pos];
|
||||
// Our secret share is our private key
|
||||
let secret_share = private_key.clone();
|
||||
|
||||
// Calculate verification shares
|
||||
let mut verification_shares = HashMap::new();
|
||||
// When this library offers a ThresholdView for a specific signing set, it applies the lagrange
|
||||
// factor
|
||||
// Since this is a n-of-n scheme, there's only one possible signing set, and one possible
|
||||
// lagrange factor
|
||||
// In the name of simplicity, we define the group key as the sum of all bound keys
|
||||
// Accordingly, the secret share must be multiplied by the inverse of the lagrange factor, along
|
||||
// with all verification shares
|
||||
// This is less performant than simply defining the group key as the sum of all post-lagrange
|
||||
// bound keys, yet the simplicity is preferred
|
||||
let included = (1 ..= keys_len)
|
||||
// This error also shouldn't be possible, for the same reasons as documented above
|
||||
.map(|l| Participant::new(l).ok_or(DkgError::InvalidSigningSet))
|
||||
.collect::<Result<Vec<_>, _>>()?;
|
||||
let mut group_key = C::G::identity();
|
||||
for (l, p) in included.iter().enumerate() {
|
||||
let bound = keys[l] * binding[l];
|
||||
group_key += bound;
|
||||
for l in 1 ..= keys_len {
|
||||
let key = keys[usize::from(l) - 1];
|
||||
group_key += key * binding[usize::from(l - 1)];
|
||||
|
||||
let lagrange_inv = lagrange::<C::F>(*p, &included).invert().unwrap();
|
||||
if params.i() == *p {
|
||||
*secret_share *= lagrange_inv;
|
||||
}
|
||||
verification_shares.insert(*p, bound * lagrange_inv);
|
||||
// These errors also shouldn't be possible, for the same reasons as documented above
|
||||
verification_shares.insert(Participant::new(l).ok_or(DkgError::InvalidSigningSet)?, key);
|
||||
}
|
||||
debug_assert_eq!(C::generator() * secret_share.deref(), verification_shares[¶ms.i()]);
|
||||
debug_assert_eq!(musig_key::<C>(context, keys).unwrap(), group_key);
|
||||
|
||||
Ok(ThresholdCore { params, secret_share, group_key, verification_shares })
|
||||
Ok(ThresholdCore::new(
|
||||
params,
|
||||
Interpolation::Constant(binding),
|
||||
secret_share,
|
||||
verification_shares,
|
||||
))
|
||||
}
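// Illustrative note (not part of the original source): with Interpolation::Constant(binding),
// the interpolation factor for participant i is simply binding[i - 1], so the group key equals
// the sum of verification_shares[i] * binding[i - 1] while each secret share remains the raw
// private key until a signing view applies that factor.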
|
||||
|
||||
@@ -22,9 +22,9 @@ use multiexp::{multiexp_vartime, BatchVerifier};
|
||||
use schnorr::SchnorrSignature;
|
||||
|
||||
use crate::{
|
||||
Participant, DkgError, ThresholdParams, ThresholdCore, validate_map,
|
||||
Participant, DkgError, ThresholdParams, Interpolation, ThresholdCore, validate_map,
|
||||
encryption::{
|
||||
ReadWrite, EncryptionKeyMessage, EncryptedMessage, Encryption, EncryptionKeyProof,
|
||||
ReadWrite, EncryptionKeyMessage, EncryptedMessage, Encryption, Decryption, EncryptionKeyProof,
|
||||
DecryptionError,
|
||||
},
|
||||
};
|
||||
@@ -32,10 +32,10 @@ use crate::{
|
||||
type FrostError<C> = DkgError<EncryptionKeyProof<C>>;
|
||||
|
||||
#[allow(non_snake_case)]
|
||||
fn challenge<C: Ciphersuite>(context: &str, l: Participant, R: &[u8], Am: &[u8]) -> C::F {
|
||||
fn challenge<C: Ciphersuite>(context: [u8; 32], l: Participant, R: &[u8], Am: &[u8]) -> C::F {
|
||||
let mut transcript = RecommendedTranscript::new(b"DKG FROST v0.2");
|
||||
transcript.domain_separate(b"schnorr_proof_of_knowledge");
|
||||
transcript.append_message(b"context", context.as_bytes());
|
||||
transcript.append_message(b"context", context);
|
||||
transcript.append_message(b"participant", l.to_bytes());
|
||||
transcript.append_message(b"nonce", R);
|
||||
transcript.append_message(b"commitments", Am);
|
||||
@@ -86,15 +86,15 @@ impl<C: Ciphersuite> ReadWrite for Commitments<C> {
|
||||
#[derive(Debug, Zeroize)]
|
||||
pub struct KeyGenMachine<C: Ciphersuite> {
|
||||
params: ThresholdParams,
|
||||
context: String,
|
||||
context: [u8; 32],
|
||||
_curve: PhantomData<C>,
|
||||
}
|
||||
|
||||
impl<C: Ciphersuite> KeyGenMachine<C> {
|
||||
/// Create a new machine to generate a key.
|
||||
///
|
||||
/// The context string should be unique among multisigs.
|
||||
pub fn new(params: ThresholdParams, context: String) -> KeyGenMachine<C> {
|
||||
/// The context should be unique among multisigs.
|
||||
pub fn new(params: ThresholdParams, context: [u8; 32]) -> KeyGenMachine<C> {
|
||||
KeyGenMachine { params, context, _curve: PhantomData }
|
||||
}
|
||||
|
||||
@@ -129,11 +129,11 @@ impl<C: Ciphersuite> KeyGenMachine<C> {
|
||||
// There's no reason to spend the time and effort to make this deterministic besides a
|
||||
// general obsession with canonicity and determinism though
|
||||
r,
|
||||
challenge::<C>(&self.context, self.params.i(), nonce.to_bytes().as_ref(), &cached_msg),
|
||||
challenge::<C>(self.context, self.params.i(), nonce.to_bytes().as_ref(), &cached_msg),
|
||||
);
|
||||
|
||||
// Additionally create an encryption mechanism to protect the secret shares
|
||||
let encryption = Encryption::new(self.context.clone(), Some(self.params.i), rng);
|
||||
let encryption = Encryption::new(self.context, self.params.i, rng);
|
||||
|
||||
// Step 4: Broadcast
|
||||
let msg =
|
||||
@@ -225,7 +225,7 @@ impl<F: PrimeField> ReadWrite for SecretShare<F> {
|
||||
#[derive(Zeroize)]
|
||||
pub struct SecretShareMachine<C: Ciphersuite> {
|
||||
params: ThresholdParams,
|
||||
context: String,
|
||||
context: [u8; 32],
|
||||
coefficients: Vec<Zeroizing<C::F>>,
|
||||
our_commitments: Vec<C::G>,
|
||||
encryption: Encryption<C>,
|
||||
@@ -274,7 +274,7 @@ impl<C: Ciphersuite> SecretShareMachine<C> {
|
||||
&mut batch,
|
||||
l,
|
||||
msg.commitments[0],
|
||||
challenge::<C>(&self.context, l, msg.sig.R.to_bytes().as_ref(), &msg.cached_msg),
|
||||
challenge::<C>(self.context, l, msg.sig.R.to_bytes().as_ref(), &msg.cached_msg),
|
||||
);
|
||||
|
||||
commitments.insert(l, msg.commitments.drain(..).collect::<Vec<_>>());
|
||||
@@ -472,9 +472,10 @@ impl<C: Ciphersuite> KeyMachine<C> {
|
||||
let KeyMachine { commitments, encryption, params, secret } = self;
|
||||
Ok(BlameMachine {
|
||||
commitments,
|
||||
encryption,
|
||||
encryption: encryption.into_decryption(),
|
||||
result: Some(ThresholdCore {
|
||||
params,
|
||||
interpolation: Interpolation::Lagrange,
|
||||
secret_share: secret,
|
||||
group_key: stripes[0],
|
||||
verification_shares,
|
||||
@@ -486,7 +487,7 @@ impl<C: Ciphersuite> KeyMachine<C> {
|
||||
/// A machine capable of handling blame proofs.
|
||||
pub struct BlameMachine<C: Ciphersuite> {
|
||||
commitments: HashMap<Participant, Vec<C::G>>,
|
||||
encryption: Encryption<C>,
|
||||
encryption: Decryption<C>,
|
||||
result: Option<ThresholdCore<C>>,
|
||||
}
|
||||
|
||||
@@ -505,7 +506,6 @@ impl<C: Ciphersuite> Zeroize for BlameMachine<C> {
|
||||
for commitments in self.commitments.values_mut() {
|
||||
commitments.zeroize();
|
||||
}
|
||||
self.encryption.zeroize();
|
||||
self.result.zeroize();
|
||||
}
|
||||
}
|
||||
@@ -598,14 +598,13 @@ impl<C: Ciphersuite> AdditionalBlameMachine<C> {
|
||||
/// authenticated as having come from the supposed party and verified as valid. Usage of invalid
|
||||
/// commitments is considered undefined behavior, and may cause everything from inaccurate blame
|
||||
/// to panics.
|
||||
pub fn new<R: RngCore + CryptoRng>(
|
||||
rng: &mut R,
|
||||
context: String,
|
||||
pub fn new(
|
||||
context: [u8; 32],
|
||||
n: u16,
|
||||
mut commitment_msgs: HashMap<Participant, EncryptionKeyMessage<C, Commitments<C>>>,
|
||||
) -> Result<Self, FrostError<C>> {
|
||||
let mut commitments = HashMap::new();
|
||||
let mut encryption = Encryption::new(context, None, rng);
|
||||
let mut encryption = Decryption::new(context);
|
||||
for i in 1 ..= n {
|
||||
let i = Participant::new(i).unwrap();
|
||||
let Some(msg) = commitment_msgs.remove(&i) else { Err(DkgError::MissingParticipant(i))? };
|
||||
|
||||
@@ -113,6 +113,7 @@ impl<C1: Ciphersuite, C2: Ciphersuite<F = C1::F, G = C1::G>> GeneratorPromotion<
|
||||
Ok(ThresholdKeys {
|
||||
core: Arc::new(ThresholdCore::new(
|
||||
params,
|
||||
self.base.core.interpolation.clone(),
|
||||
self.base.secret_share().clone(),
|
||||
verification_shares,
|
||||
)),
|
||||
|
||||
crypto/dkg/src/tests/evrf/mod.rs (new file, 79 lines)
@@ -0,0 +1,79 @@
|
||||
use std::collections::HashMap;
|
||||
|
||||
use zeroize::Zeroizing;
|
||||
use rand_core::OsRng;
|
||||
use rand::seq::SliceRandom;
|
||||
|
||||
use ciphersuite::{group::ff::Field, Ciphersuite};
|
||||
|
||||
use crate::{
|
||||
Participant,
|
||||
evrf::*,
|
||||
tests::{THRESHOLD, PARTICIPANTS, recover_key},
|
||||
};
|
||||
|
||||
mod proof;
|
||||
use proof::{Pallas, Vesta};
|
||||
|
||||
#[test]
|
||||
fn evrf_dkg() {
|
||||
let generators = EvrfGenerators::<Pallas>::new(THRESHOLD, PARTICIPANTS);
|
||||
let context = [0; 32];
|
||||
|
||||
let mut priv_keys = vec![];
|
||||
let mut pub_keys = vec![];
|
||||
for i in 0 .. PARTICIPANTS {
|
||||
let priv_key = <Vesta as Ciphersuite>::F::random(&mut OsRng);
|
||||
pub_keys.push(<Vesta as Ciphersuite>::generator() * priv_key);
|
||||
priv_keys.push((Participant::new(1 + i).unwrap(), Zeroizing::new(priv_key)));
|
||||
}
|
||||
|
||||
let mut participations = HashMap::new();
|
||||
// Shuffle the private keys so we iterate over a random subset of them
|
||||
priv_keys.shuffle(&mut OsRng);
|
||||
for (i, priv_key) in priv_keys.iter().take(usize::from(THRESHOLD)) {
|
||||
participations.insert(
|
||||
*i,
|
||||
EvrfDkg::<Pallas>::participate(
|
||||
&mut OsRng,
|
||||
&generators,
|
||||
context,
|
||||
THRESHOLD,
|
||||
&pub_keys,
|
||||
priv_key,
|
||||
)
|
||||
.unwrap(),
|
||||
);
|
||||
}
|
||||
|
||||
let VerifyResult::Valid(dkg) = EvrfDkg::<Pallas>::verify(
|
||||
&mut OsRng,
|
||||
&generators,
|
||||
context,
|
||||
THRESHOLD,
|
||||
&pub_keys,
|
||||
&participations,
|
||||
)
|
||||
.unwrap() else {
|
||||
panic!("verify didn't return VerifyResult::Valid")
|
||||
};
|
||||
|
||||
let mut group_key = None;
|
||||
let mut verification_shares = None;
|
||||
let mut all_keys = HashMap::new();
|
||||
for (i, priv_key) in priv_keys {
|
||||
let keys = dkg.keys(&priv_key).into_iter().next().unwrap();
|
||||
assert_eq!(keys.params().i(), i);
|
||||
assert_eq!(keys.params().t(), THRESHOLD);
|
||||
assert_eq!(keys.params().n(), PARTICIPANTS);
|
||||
group_key = group_key.or(Some(keys.group_key()));
|
||||
verification_shares = verification_shares.or(Some(keys.verification_shares()));
|
||||
assert_eq!(Some(keys.group_key()), group_key);
|
||||
assert_eq!(Some(keys.verification_shares()), verification_shares);
|
||||
|
||||
all_keys.insert(i, keys);
|
||||
}
|
||||
|
||||
// TODO: Test for all possible combinations of keys
|
||||
assert_eq!(Pallas::generator() * recover_key(&all_keys), group_key.unwrap());
|
||||
}
|
||||
crypto/dkg/src/tests/evrf/proof.rs (new file, 118 lines)
@@ -0,0 +1,118 @@
|
||||
use std::time::Instant;
|
||||
|
||||
use rand_core::OsRng;
|
||||
|
||||
use zeroize::{Zeroize, Zeroizing};
|
||||
use generic_array::typenum::{Sum, Diff, Quot, U, U1, U2};
|
||||
use blake2::{Digest, Blake2b512};
|
||||
|
||||
use ciphersuite::{
|
||||
group::{
|
||||
ff::{FromUniformBytes, Field, PrimeField},
|
||||
Group,
|
||||
},
|
||||
Ciphersuite, Secp256k1, Ed25519, Ristretto,
|
||||
};
|
||||
use pasta_curves::{Ep, Eq, Fp, Fq};
|
||||
|
||||
use generalized_bulletproofs::tests::generators;
|
||||
use generalized_bulletproofs_ec_gadgets::DiscreteLogParameters;
|
||||
|
||||
use crate::evrf::proof::*;
|
||||
|
||||
#[derive(Clone, Copy, PartialEq, Eq, Debug, Zeroize)]
|
||||
pub(crate) struct Pallas;
|
||||
impl Ciphersuite for Pallas {
|
||||
type F = Fq;
|
||||
type G = Ep;
|
||||
type H = Blake2b512;
|
||||
const ID: &'static [u8] = b"Pallas";
|
||||
fn generator() -> Ep {
|
||||
Ep::generator()
|
||||
}
|
||||
fn hash_to_F(dst: &[u8], msg: &[u8]) -> Self::F {
|
||||
// This naive concat may be insecure in a real world deployment
|
||||
// This is solely test code so it's fine
|
||||
Self::F::from_uniform_bytes(&Self::H::digest([dst, msg].concat()).into())
|
||||
}
|
||||
}
|
||||
|
||||
#[derive(Clone, Copy, PartialEq, Eq, Debug, Zeroize)]
|
||||
pub(crate) struct Vesta;
|
||||
impl Ciphersuite for Vesta {
|
||||
type F = Fp;
|
||||
type G = Eq;
|
||||
type H = Blake2b512;
|
||||
const ID: &'static [u8] = b"Vesta";
|
||||
fn generator() -> Eq {
|
||||
Eq::generator()
|
||||
}
|
||||
fn hash_to_F(dst: &[u8], msg: &[u8]) -> Self::F {
|
||||
// This naive concat may be insecure in a real world deployment
|
||||
// This is solely test code so it's fine
|
||||
Self::F::from_uniform_bytes(&Self::H::digest([dst, msg].concat()).into())
|
||||
}
|
||||
}
|
||||
|
||||
pub struct VestaParams;
|
||||
impl DiscreteLogParameters for VestaParams {
|
||||
type ScalarBits = U<{ <<Vesta as Ciphersuite>::F as PrimeField>::NUM_BITS as usize }>;
|
||||
type XCoefficients = Quot<Sum<Self::ScalarBits, U1>, U2>;
|
||||
type XCoefficientsMinusOne = Diff<Self::XCoefficients, U1>;
|
||||
type YxCoefficients = Diff<Quot<Sum<Sum<Self::ScalarBits, U1>, U1>, U2>, U2>;
|
||||
}
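// Illustrative note (not part of the original source), assuming Vesta's scalar field is 255 bits:
// ScalarBits = 255, XCoefficients = (255 + 1) / 2 = 128, XCoefficientsMinusOne = 127, and
// YxCoefficients = ((255 + 2) / 2) - 2 = 126, matching the padded lengths the prover writes onto
// the vector commitment tape.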
|
||||
|
||||
impl EvrfCurve for Pallas {
|
||||
type EmbeddedCurve = Vesta;
|
||||
type EmbeddedCurveParameters = VestaParams;
|
||||
}
|
||||
|
||||
fn evrf_proof_test<C: EvrfCurve>() {
|
||||
let generators = generators(1024);
|
||||
let vesta_private_key = Zeroizing::new(<C::EmbeddedCurve as Ciphersuite>::F::random(&mut OsRng));
|
||||
let ecdh_public_keys = [
|
||||
<C::EmbeddedCurve as Ciphersuite>::G::random(&mut OsRng),
|
||||
<C::EmbeddedCurve as Ciphersuite>::G::random(&mut OsRng),
|
||||
];
|
||||
let time = Instant::now();
|
||||
let res =
|
||||
Evrf::<C>::prove(&mut OsRng, &generators, [0; 32], 1, &ecdh_public_keys, &vesta_private_key)
|
||||
.unwrap();
|
||||
println!("Proving time: {:?}", time.elapsed());
|
||||
|
||||
let time = Instant::now();
|
||||
let mut verifier = generators.batch_verifier();
|
||||
Evrf::<C>::verify(
|
||||
&mut OsRng,
|
||||
&generators,
|
||||
&mut verifier,
|
||||
[0; 32],
|
||||
1,
|
||||
&ecdh_public_keys,
|
||||
C::EmbeddedCurve::generator() * *vesta_private_key,
|
||||
&res.proof,
|
||||
)
|
||||
.unwrap();
|
||||
assert!(generators.verify(verifier));
|
||||
println!("Verifying time: {:?}", time.elapsed());
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn pallas_evrf_proof_test() {
|
||||
evrf_proof_test::<Pallas>();
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn secp256k1_evrf_proof_test() {
|
||||
evrf_proof_test::<Secp256k1>();
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn ed25519_evrf_proof_test() {
|
||||
evrf_proof_test::<Ed25519>();
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn ristretto_evrf_proof_test() {
|
||||
evrf_proof_test::<Ristretto>();
|
||||
}
|
||||
@@ -6,7 +6,7 @@ use rand_core::{RngCore, CryptoRng};
|
||||
|
||||
use ciphersuite::{group::ff::Field, Ciphersuite};
|
||||
|
||||
use crate::{Participant, ThresholdCore, ThresholdKeys, lagrange, musig::musig as musig_fn};
|
||||
use crate::{Participant, ThresholdCore, ThresholdKeys, musig::musig as musig_fn};
|
||||
|
||||
mod musig;
|
||||
pub use musig::test_musig;
|
||||
@@ -19,6 +19,9 @@ use pedpop::pedpop_gen;
|
||||
mod promote;
|
||||
use promote::test_generator_promotion;
|
||||
|
||||
#[cfg(all(test, feature = "evrf"))]
|
||||
mod evrf;
|
||||
|
||||
/// Constant amount of participants to use when testing.
|
||||
pub const PARTICIPANTS: u16 = 5;
|
||||
/// Constant threshold of participants to use when testing.
|
||||
@@ -43,7 +46,8 @@ pub fn recover_key<C: Ciphersuite>(keys: &HashMap<Participant, ThresholdKeys<C>>
|
||||
let included = keys.keys().copied().collect::<Vec<_>>();
|
||||
|
||||
let group_private = keys.iter().fold(C::F::ZERO, |accum, (i, keys)| {
|
||||
accum + (lagrange::<C::F>(*i, &included) * keys.secret_share().deref())
|
||||
accum +
|
||||
(first.core.interpolation.interpolation_factor(*i, &included) * keys.secret_share().deref())
|
||||
});
|
||||
assert_eq!(C::generator() * group_private, first.group_key(), "failed to recover keys");
|
||||
group_private
|
||||
|
||||
@@ -14,7 +14,7 @@ use crate::{
|
||||
type PedPoPEncryptedMessage<C> = EncryptedMessage<C, SecretShare<<C as Ciphersuite>::F>>;
|
||||
type PedPoPSecretShares<C> = HashMap<Participant, PedPoPEncryptedMessage<C>>;
|
||||
|
||||
const CONTEXT: &str = "DKG Test Key Generation";
|
||||
const CONTEXT: [u8; 32] = *b"DKG Test Key Generation ";
|
||||
|
||||
// Commit, then return commitment messages, enc keys, and shares
|
||||
#[allow(clippy::type_complexity)]
|
||||
@@ -31,7 +31,7 @@ fn commit_enc_keys_and_shares<R: RngCore + CryptoRng, C: Ciphersuite>(
|
||||
let mut enc_keys = HashMap::new();
|
||||
for i in (1 ..= PARTICIPANTS).map(Participant) {
|
||||
let params = ThresholdParams::new(THRESHOLD, PARTICIPANTS, i).unwrap();
|
||||
let machine = KeyGenMachine::<C>::new(params, CONTEXT.to_string());
|
||||
let machine = KeyGenMachine::<C>::new(params, CONTEXT);
|
||||
let (machine, these_commitments) = machine.generate_coefficients(rng);
|
||||
machines.insert(i, machine);
|
||||
|
||||
@@ -147,14 +147,12 @@ mod literal {
|
||||
|
||||
// Verify machines constructed with AdditionalBlameMachine::new work
|
||||
assert_eq!(
|
||||
AdditionalBlameMachine::new(
|
||||
&mut OsRng,
|
||||
CONTEXT.to_string(),
|
||||
PARTICIPANTS,
|
||||
commitment_msgs.clone()
|
||||
)
|
||||
.unwrap()
|
||||
.blame(ONE, TWO, msg.clone(), blame.clone()),
|
||||
AdditionalBlameMachine::new(CONTEXT, PARTICIPANTS, commitment_msgs.clone()).unwrap().blame(
|
||||
ONE,
|
||||
TWO,
|
||||
msg.clone(),
|
||||
blame.clone()
|
||||
),
|
||||
ONE,
|
||||
);
|
||||
}
|
||||
|
||||