Mirror of https://github.com/serai-dex/serai.git (synced 2025-12-08 12:19:24 +00:00)
One Round DKG (#589)
* Upstream GBP, divisor, circuit abstraction, and EC gadgets from FCMP++
* Initial eVRF implementation. Not quite done yet: it needs to communicate the resulting points and proofs to extract them from the Pedersen Commitments in order to return those, and then be tested.
* Add the openings of the PCs to the eVRF as necessary
* Add implementation of secq256k1
* Make DKG Encryption a bit more flexible. No longer requires the use of an EncryptionKeyMessage, and allows pre-defined keys for encryption.
* Make NUM_BITS an argument for the field macro
* Have the eVRF take a Zeroizing private key
* Initial eVRF-based DKG
* Add embedwards25519 curve
* Inline the eVRF into the DKG library. Due to how we're handling share encryption, we'd either need two circuits or to dedicate this circuit to the DKG. The latter makes sense at this time.
* Add documentation to the eVRF-based DKG
* Add paragraph claiming robustness
* Update to the new eVRF proof
* Finish routing the eVRF functionality. Still needs errors and serialization, along with a few other TODOs.
* Add initial eVRF DKG test
* Improve eVRF DKG. Updates how we calculate verification shares, improves performance when extracting multiple sets of keys, and adds more to the test for it.
* Start using a proper error for the eVRF DKG
* Resolve various TODOs. Supports recovering multiple key shares from the eVRF DKG. Inlines two loops to save 2**16 iterations. Adds support for creating a constant-time representation of scalars < NUM_BITS.
* Ban zero ECDH keys, document non-zero requirements
* Implement eVRF traits, all the way up to the DKG, for secp256k1/ed25519
* Add Ristretto eVRF trait impls
* Support participating multiple times in the eVRF DKG
* Only participate once per key, not once per key share
* Rewrite processor key-gen around the eVRF DKG. Still a WIP.
* Finish routing the new key gen in the processor. Doesn't touch the tests, coordinator, nor Substrate yet. `cargo +nightly fmt && cargo +nightly-2024-07-01 clippy --all-features -p serai-processor` does pass.
* Deduplicate and better document in processor key_gen
* Update serai-processor tests to the new key gen
* Correct the amount of yx coefficients, get the processor key gen test to pass
* Add embedded elliptic curve keys to Substrate
* Update processor key gen tests to the eVRF DKG
* Have set_keys take signature_participants, not removed_participants. Now no one is removed from the DKG; only `t` people publish the key, however. Uses a BitVec for an efficient encoding of the participants.
* Update the coordinator binary for the new DKG. This does not yet update any tests.
* Add a sensible Debug to key_gen::[Processor, Coordinator]Message
* Have the DKG explicitly declare how to interpolate its shares. Removes the hack for MuSig where we multiply keys by the inverse of their Lagrange interpolation factor.
* Replace Interpolation::None with Interpolation::Constant. Allows the MuSig DKG to keep the secret share as the original private key, enabling deriving FROST nonces consistently regardless of the MuSig context.
* Get coordinator tests to pass
* Update the spec to the new DKG
* Get clippy to pass across the repo
* cargo machete
* Add an extra sleep to ensure the expected ordering of `Participation`s
* Update orchestration
* Remove a bad panic in the coordinator. It expected ConfirmationShare to be n-of-n, not t-of-n.
* Improve documentation on functions
* Update the TX size limit. We no longer have to support the ridiculous case of having 49 DKG participations within a 101-of-150 DKG. It does remain quite high due to needing to _sign_ so many times. It may be optimal for parties with multiple key shares to independently send their preprocesses/shares (despite the overhead that'll cause with signatures and the transaction structure).
* Correct an error in the Processor spec document
* Update a few comments in the validator-sets pallet
* Send/Recv Participation one at a time. Sending all, then attempting to receive all in an expected order, wasn't working even with notable delays between sending messages. This points to the mempool not working as expected...
* Correct ThresholdKeys serialization in the modular-frost test
* Update the existing TX size limit test for the new DKG parameters
* Increase the time allowed for the DKG on the GH CI
* Correct construction of signature_participants in serai-client tests. Fault identified by akil.
* Further contextualize DkgConfirmer by ValidatorSet. Caught by a safety check that we wouldn't reuse preprocesses across messages. That raises the question of whether we were previously reusing preprocesses (reusing keys)? Except that would have caused a variety of signing failures (suggesting some staggered timing avoided it in practice, but yes, this was possible in theory).
* Add necessary calls to set_embedded_elliptic_curve_key in coordinator set rotation tests
* Correct the shimmed setting of a secq256k1 key
* cargo fmt
* Don't use `[0; 32]` for the embedded keys in the coordinator rotation test. The key_gen function expects the random values to already be decided.
* Big-endian secq256k1 scalars. Also restores the prior, safer, Encryption::register function.
crypto/evrf/circuit-abstraction/Cargo.toml (new file, 20 lines)
@@ -0,0 +1,20 @@
[package]
name = "generalized-bulletproofs-circuit-abstraction"
version = "0.1.0"
description = "An abstraction for arithmetic circuits over Generalized Bulletproofs"
license = "MIT"
repository = "https://github.com/serai-dex/serai/tree/develop/crypto/evrf/circuit-abstraction"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = ["bulletproofs", "circuit"]
edition = "2021"

[package.metadata.docs.rs]
all-features = true
rustdoc-args = ["--cfg", "docsrs"]

[dependencies]
zeroize = { version = "^1.5", default-features = false, features = ["zeroize_derive"] }

ciphersuite = { path = "../../ciphersuite", version = "0.4", default-features = false, features = ["std"] }

generalized-bulletproofs = { path = "../generalized-bulletproofs" }
crypto/evrf/circuit-abstraction/LICENSE (new file, 21 lines)
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2024 Luke Parker

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
crypto/evrf/circuit-abstraction/README.md (new file, 3 lines)
@@ -0,0 +1,3 @@
# Generalized Bulletproofs Circuit Abstraction

A circuit abstraction around `generalized-bulletproofs`.
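As an editor's illustration (not part of this commit), the following is a minimal sketch of how the `Circuit` type defined in `src/lib.rs` and `src/gadgets.rs` below appears intended to be used. The `square_equals_nine` gadget and its name are hypothetical; only the `Circuit`/`LinComb` calls come from the files in this diff.

```rust
use ciphersuite::Ciphersuite;
use generalized_bulletproofs_circuit_abstraction::{Circuit, LinComb};

/// Hypothetical gadget: constrain a witnessed value to square to nine.
///
/// The witness is provided when proving and omitted when verifying.
fn square_equals_nine<C: Ciphersuite>(circuit: &mut Circuit<C>, witness: Option<C::F>) {
  // One multiplication gate, with the witness used for both inputs
  let (l, r, o) = circuit.mul(None, None, witness.map(|x| (x, x)));
  // The left and right inputs are the same value
  circuit.equality(LinComb::from(l), &LinComb::from(r));
  // The output must equal 9, i.e. `o - 9 = 0`
  circuit.constrain_equal_to_zero(LinComb::from(o).constant(-C::F::from(9u64)));
}
```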
crypto/evrf/circuit-abstraction/src/gadgets.rs (new file, 39 lines)
@@ -0,0 +1,39 @@
use ciphersuite::{group::ff::Field, Ciphersuite};

use crate::*;

impl<C: Ciphersuite> Circuit<C> {
  /// Constrain two linear combinations to be equal.
  pub fn equality(&mut self, a: LinComb<C::F>, b: &LinComb<C::F>) {
    self.constrain_equal_to_zero(a - b);
  }

  /// Calculate (and constrain) the inverse of a value.
  ///
  /// A linear combination may optionally be passed as a constraint for the value being inverted.
  /// A reference to the inverted value and its inverse is returned.
  ///
  /// May panic if any linear combinations reference non-existent terms, the witness isn't provided
  /// when proving/is provided when verifying, or if the witness is 0 (and accordingly doesn't have
  /// an inverse).
  pub fn inverse(
    &mut self,
    lincomb: Option<LinComb<C::F>>,
    witness: Option<C::F>,
  ) -> (Variable, Variable) {
    let (l, r, o) = self.mul(lincomb, None, witness.map(|f| (f, f.invert().unwrap())));
    // The output of a value multiplied by its inverse is 1
    // Constrain `1 o - 1 = 0`
    self.constrain_equal_to_zero(LinComb::from(o).constant(-C::F::ONE));
    (l, r)
  }

  /// Constrain two linear combinations as inequal.
  ///
  /// May panic if any linear combinations reference non-existent terms.
  pub fn inequality(&mut self, a: LinComb<C::F>, b: &LinComb<C::F>, witness: Option<(C::F, C::F)>) {
    let l_constraint = a - b;
    // The existence of a multiplicative inverse means a-b != 0, which means a != b
    self.inverse(Some(l_constraint), witness.map(|(a, b)| a - b));
  }
}
crypto/evrf/circuit-abstraction/src/lib.rs (new file, 192 lines)
@@ -0,0 +1,192 @@
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
#![doc = include_str!("../README.md")]
#![deny(missing_docs)]
#![allow(non_snake_case)]

use zeroize::{Zeroize, ZeroizeOnDrop};

use ciphersuite::{
  group::ff::{Field, PrimeField},
  Ciphersuite,
};

use generalized_bulletproofs::{
  ScalarVector, PedersenCommitment, PedersenVectorCommitment, ProofGenerators,
  transcript::{Transcript as ProverTranscript, VerifierTranscript, Commitments},
  arithmetic_circuit_proof::{AcError, ArithmeticCircuitStatement, ArithmeticCircuitWitness},
};
pub use generalized_bulletproofs::arithmetic_circuit_proof::{Variable, LinComb};

mod gadgets;

/// A trait for the transcript, whether proving or verifying, as necessary for sampling
/// challenges.
pub trait Transcript {
  /// Sample a challenge from the transcript.
  ///
  /// It is the caller's responsibility to have properly transcripted all variables prior to
  /// sampling this challenge.
  fn challenge<F: PrimeField>(&mut self) -> F;
}
impl Transcript for ProverTranscript {
  fn challenge<F: PrimeField>(&mut self) -> F {
    self.challenge()
  }
}
impl Transcript for VerifierTranscript<'_> {
  fn challenge<F: PrimeField>(&mut self) -> F {
    self.challenge()
  }
}

/// The witness for the satisfaction of this circuit.
#[derive(Clone, PartialEq, Eq, Debug, Zeroize, ZeroizeOnDrop)]
struct ProverData<C: Ciphersuite> {
  aL: Vec<C::F>,
  aR: Vec<C::F>,
  C: Vec<PedersenVectorCommitment<C>>,
  V: Vec<PedersenCommitment<C>>,
}

/// A struct representing a circuit.
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct Circuit<C: Ciphersuite> {
  muls: usize,
  // A series of linear combinations which must evaluate to 0.
  constraints: Vec<LinComb<C::F>>,
  prover: Option<ProverData<C>>,
}

impl<C: Ciphersuite> Circuit<C> {
  /// Returns the amount of multiplications used by this circuit.
  pub fn muls(&self) -> usize {
    self.muls
  }

  /// Create an instance to prove satisfaction of a circuit with.
  // TODO: Take the transcript here
  #[allow(clippy::type_complexity)]
  pub fn prove(
    vector_commitments: Vec<PedersenVectorCommitment<C>>,
    commitments: Vec<PedersenCommitment<C>>,
  ) -> Self {
    Self {
      muls: 0,
      constraints: vec![],
      prover: Some(ProverData { aL: vec![], aR: vec![], C: vector_commitments, V: commitments }),
    }
  }

  /// Create an instance to verify a proof with.
  // TODO: Take the transcript here
  pub fn verify() -> Self {
    Self { muls: 0, constraints: vec![], prover: None }
  }

  /// Evaluate a linear combination.
  ///
  /// Yields WL aL + WR aR + WO aO + WCG CG + WCH CH + WV V + c.
  ///
  /// May panic if the linear combination references non-existent terms.
  ///
  /// Returns None if not a prover.
  pub fn eval(&self, lincomb: &LinComb<C::F>) -> Option<C::F> {
    self.prover.as_ref().map(|prover| {
      let mut res = lincomb.c();
      for (index, weight) in lincomb.WL() {
        res += prover.aL[*index] * weight;
      }
      for (index, weight) in lincomb.WR() {
        res += prover.aR[*index] * weight;
      }
      for (index, weight) in lincomb.WO() {
        res += prover.aL[*index] * prover.aR[*index] * weight;
      }
      for (WCG, C) in lincomb.WCG().iter().zip(&prover.C) {
        for (j, weight) in WCG {
          res += C.g_values[*j] * weight;
        }
      }
      for (WCH, C) in lincomb.WCH().iter().zip(&prover.C) {
        for (j, weight) in WCH {
          res += C.h_values[*j] * weight;
        }
      }
      for (index, weight) in lincomb.WV() {
        res += prover.V[*index].value * weight;
      }
      res
    })
  }

  /// Multiply two values, optionally constrained, returning the constrainable left/right/out
  /// terms.
  ///
  /// May panic if any linear combinations reference non-existent terms or if the witness isn't
  /// provided when proving/is provided when verifying.
  pub fn mul(
    &mut self,
    a: Option<LinComb<C::F>>,
    b: Option<LinComb<C::F>>,
    witness: Option<(C::F, C::F)>,
  ) -> (Variable, Variable, Variable) {
    let l = Variable::aL(self.muls);
    let r = Variable::aR(self.muls);
    let o = Variable::aO(self.muls);
    self.muls += 1;

    debug_assert_eq!(self.prover.is_some(), witness.is_some());
    if let Some(witness) = witness {
      let prover = self.prover.as_mut().unwrap();
      prover.aL.push(witness.0);
      prover.aR.push(witness.1);
    }

    if let Some(a) = a {
      self.constrain_equal_to_zero(a.term(-C::F::ONE, l));
    }
    if let Some(b) = b {
      self.constrain_equal_to_zero(b.term(-C::F::ONE, r));
    }

    (l, r, o)
  }

  /// Constrain a linear combination to be equal to 0.
  ///
  /// May panic if the linear combination references non-existent terms.
  pub fn constrain_equal_to_zero(&mut self, lincomb: LinComb<C::F>) {
    self.constraints.push(lincomb);
  }

  /// Obtain the statement for this circuit.
  ///
  /// If configured as the prover, the witness to use is also returned.
  #[allow(clippy::type_complexity)]
  pub fn statement(
    self,
    generators: ProofGenerators<'_, C>,
    commitments: Commitments<C>,
  ) -> Result<(ArithmeticCircuitStatement<'_, C>, Option<ArithmeticCircuitWitness<C>>), AcError> {
    let statement = ArithmeticCircuitStatement::new(generators, self.constraints, commitments)?;

    let witness = self
      .prover
      .map(|mut prover| {
        // We can't deconstruct the witness as it implements Drop (per ZeroizeOnDrop)
        // Accordingly, we take the values within it and move forward with those
        let mut aL = vec![];
        std::mem::swap(&mut prover.aL, &mut aL);
        let mut aR = vec![];
        std::mem::swap(&mut prover.aR, &mut aR);
        let mut C = vec![];
        std::mem::swap(&mut prover.C, &mut C);
        let mut V = vec![];
        std::mem::swap(&mut prover.V, &mut V);
        ArithmeticCircuitWitness::new(ScalarVector::from(aL), ScalarVector::from(aR), C, V)
      })
      .transpose()?;

    Ok((statement, witness))
  }
}
crypto/evrf/divisors/Cargo.toml (new file, 34 lines)
@@ -0,0 +1,34 @@
[package]
name = "ec-divisors"
version = "0.1.0"
description = "A library for calculating elliptic curve divisors"
license = "MIT"
repository = "https://github.com/serai-dex/serai/tree/develop/crypto/evrf/divisors"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = ["ciphersuite", "ff", "group"]
edition = "2021"

[package.metadata.docs.rs]
all-features = true
rustdoc-args = ["--cfg", "docsrs"]

[dependencies]
rand_core = { version = "0.6", default-features = false }
zeroize = { version = "^1.5", default-features = false, features = ["zeroize_derive"] }

group = "0.13"

hex = { version = "0.4", optional = true }
dalek-ff-group = { path = "../../dalek-ff-group", features = ["std"], optional = true }
pasta_curves = { version = "0.5", default-features = false, features = ["bits", "alloc"], optional = true }

[dev-dependencies]
rand_core = { version = "0.6", features = ["getrandom"] }

hex = "0.4"
dalek-ff-group = { path = "../../dalek-ff-group", features = ["std"] }
pasta_curves = { version = "0.5", default-features = false, features = ["bits", "alloc"] }

[features]
ed25519 = ["hex", "dalek-ff-group"]
pasta = ["pasta_curves"]
crypto/evrf/divisors/LICENSE (new file, 21 lines)
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2023-2024 Luke Parker

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
crypto/evrf/divisors/README.md (new file, 4 lines)
@@ -0,0 +1,4 @@
# Elliptic Curve Divisors

An implementation of a representation for and construction of elliptic curve
divisors, intended for Eagen's [EC IP work](https://eprint.iacr.org/2022/596).
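As an editor's illustration (not part of this commit), the sketch below shows how the `new_divisor` function added in `src/lib.rs` may be used with the Pallas curve (the `pasta` feature enables the `DivisorCurve` impl). It mirrors what the crate's own tests in `src/tests/mod.rs` do; the `main` wrapper is hypothetical.

```rust
use rand_core::OsRng;
use group::Group;
use pasta_curves::Ep;

use ec_divisors::{DivisorCurve, new_divisor};

fn main() {
  // The points to interpolate must sum to the point at infinity
  let a = Ep::random(&mut OsRng);
  let b = Ep::random(&mut OsRng);
  let points = vec![a, b, -(a + b)];

  // Construct the interpolating divisor, then evaluate it at the affine
  // coordinates of some other point on the curve
  let divisor = new_divisor::<Ep>(&points).unwrap();
  let (x, y) = <Ep as DivisorCurve>::to_xy(Ep::random(&mut OsRng)).unwrap();
  let _value = divisor.eval(x, y);
}
```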
crypto/evrf/divisors/src/lib.rs (new file, 287 lines)
@@ -0,0 +1,287 @@
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
#![doc = include_str!("../README.md")]
#![deny(missing_docs)]
#![allow(non_snake_case)]

use group::{
  ff::{Field, PrimeField},
  Group,
};

mod poly;
pub use poly::*;

#[cfg(test)]
mod tests;

/// A curve usable with this library.
pub trait DivisorCurve: Group {
  /// An element of the field this curve is defined over.
  type FieldElement: PrimeField;

  /// The A in the curve equation y^2 = x^3 + A x + B.
  fn a() -> Self::FieldElement;
  /// The B in the curve equation y^2 = x^3 + A x + B.
  fn b() -> Self::FieldElement;

  /// y^2 - x^3 - A x - B
  ///
  /// Section 2 of the security proofs defines this modulus.
  ///
  /// This MUST NOT be overridden.
  // TODO: Move to an extension trait
  fn divisor_modulus() -> Poly<Self::FieldElement> {
    Poly {
      // 0 y**1, 1 y**2
      y_coefficients: vec![Self::FieldElement::ZERO, Self::FieldElement::ONE],
      yx_coefficients: vec![],
      x_coefficients: vec![
        // - A x
        -Self::a(),
        // 0 x^2
        Self::FieldElement::ZERO,
        // - x^3
        -Self::FieldElement::ONE,
      ],
      // - B
      zero_coefficient: -Self::b(),
    }
  }

  /// Convert a point to its x and y coordinates.
  ///
  /// Returns None if passed the point at infinity.
  fn to_xy(point: Self) -> Option<(Self::FieldElement, Self::FieldElement)>;
}

/// Calculate the slope and intercept between two points.
///
/// This function panics when `a @ infinity`, `b @ infinity`, `a == b`, or when `a == -b`.
pub(crate) fn slope_intercept<C: DivisorCurve>(a: C, b: C) -> (C::FieldElement, C::FieldElement) {
  let (ax, ay) = C::to_xy(a).unwrap();
  debug_assert_eq!(C::divisor_modulus().eval(ax, ay), C::FieldElement::ZERO);
  let (bx, by) = C::to_xy(b).unwrap();
  debug_assert_eq!(C::divisor_modulus().eval(bx, by), C::FieldElement::ZERO);
  let slope = (by - ay) *
    Option::<C::FieldElement>::from((bx - ax).invert())
      .expect("trying to get slope/intercept of points sharing an x coordinate");
  let intercept = by - (slope * bx);
  debug_assert!(bool::from((ay - (slope * ax) - intercept).is_zero()));
  debug_assert!(bool::from((by - (slope * bx) - intercept).is_zero()));
  (slope, intercept)
}

// The line interpolating two points.
fn line<C: DivisorCurve>(a: C, mut b: C) -> Poly<C::FieldElement> {
  // If they're both the point at infinity, we simply set the line to one
  if bool::from(a.is_identity() & b.is_identity()) {
    return Poly {
      y_coefficients: vec![],
      yx_coefficients: vec![],
      x_coefficients: vec![],
      zero_coefficient: C::FieldElement::ONE,
    };
  }

  // If either point is the point at infinity, or these are additive inverses, the line is
  // `1 * x - x`. The first `x` is a term in the polynomial, the `x` is the `x` coordinate of these
  // points (of which there is one, as the second point is either at infinity or has a matching `x`
  // coordinate).
  if bool::from(a.is_identity() | b.is_identity()) || (a == -b) {
    let (x, _) = C::to_xy(if !bool::from(a.is_identity()) { a } else { b }).unwrap();
    return Poly {
      y_coefficients: vec![],
      yx_coefficients: vec![],
      x_coefficients: vec![C::FieldElement::ONE],
      zero_coefficient: -x,
    };
  }

  // If the points are equal, we use the line interpolating the sum of these points with the point
  // at infinity
  if a == b {
    b = -a.double();
  }

  let (slope, intercept) = slope_intercept::<C>(a, b);

  // Section 4 of the proofs explicitly states the line `L = y - lambda * x - mu`
  // y - (slope * x) - intercept
  Poly {
    y_coefficients: vec![C::FieldElement::ONE],
    yx_coefficients: vec![],
    x_coefficients: vec![-slope],
    zero_coefficient: -intercept,
  }
}

/// Create a divisor interpolating the following points.
///
/// Returns None if:
///   - No points were passed in
///   - The points don't sum to the point at infinity
///   - A passed in point was the point at infinity
#[allow(clippy::new_ret_no_self)]
pub fn new_divisor<C: DivisorCurve>(points: &[C]) -> Option<Poly<C::FieldElement>> {
  // A single point is either the point at infinity, or this doesn't sum to the point at infinity
  // Both cause us to return None
  if points.len() < 2 {
    None?;
  }
  if points.iter().sum::<C>() != C::identity() {
    None?;
  }

  // Create the initial set of divisors
  let mut divs = vec![];
  let mut iter = points.iter().copied();
  while let Some(a) = iter.next() {
    if a == C::identity() {
      None?;
    }

    let b = iter.next();
    if b == Some(C::identity()) {
      None?;
    }

    // Draw the line between those points
    divs.push((a + b.unwrap_or(C::identity()), line::<C>(a, b.unwrap_or(-a))));
  }

  let modulus = C::divisor_modulus();

  // Pair them off until only one remains
  while divs.len() > 1 {
    let mut next_divs = vec![];
    // If there's an odd amount of divisors, carry the odd one out to the next iteration
    if (divs.len() % 2) == 1 {
      next_divs.push(divs.pop().unwrap());
    }

    while let Some((a, a_div)) = divs.pop() {
      let (b, b_div) = divs.pop().unwrap();

      // Merge the two divisors
      let numerator = a_div.mul_mod(b_div, &modulus).mul_mod(line::<C>(a, b), &modulus);
      let denominator = line::<C>(a, -a).mul_mod(line::<C>(b, -b), &modulus);
      let (q, r) = numerator.div_rem(&denominator);
      assert_eq!(r, Poly::zero());

      next_divs.push((a + b, q));
    }

    divs = next_divs;
  }

  // Return the unified divisor
  Some(divs.remove(0).1)
}

#[cfg(any(test, feature = "pasta"))]
mod pasta {
  use group::{ff::Field, Curve};
  use pasta_curves::{
    arithmetic::{Coordinates, CurveAffine},
    Ep, Fp, Eq, Fq,
  };
  use crate::DivisorCurve;

  impl DivisorCurve for Ep {
    type FieldElement = Fp;

    fn a() -> Self::FieldElement {
      Self::FieldElement::ZERO
    }
    fn b() -> Self::FieldElement {
      Self::FieldElement::from(5u64)
    }

    fn to_xy(point: Self) -> Option<(Self::FieldElement, Self::FieldElement)> {
      Option::<Coordinates<_>>::from(point.to_affine().coordinates())
        .map(|coords| (*coords.x(), *coords.y()))
    }
  }

  impl DivisorCurve for Eq {
    type FieldElement = Fq;

    fn a() -> Self::FieldElement {
      Self::FieldElement::ZERO
    }
    fn b() -> Self::FieldElement {
      Self::FieldElement::from(5u64)
    }

    fn to_xy(point: Self) -> Option<(Self::FieldElement, Self::FieldElement)> {
      Option::<Coordinates<_>>::from(point.to_affine().coordinates())
        .map(|coords| (*coords.x(), *coords.y()))
    }
  }
}

#[cfg(any(test, feature = "ed25519"))]
mod ed25519 {
  use group::{
    ff::{Field, PrimeField},
    Group, GroupEncoding,
  };
  use dalek_ff_group::{FieldElement, EdwardsPoint};

  impl crate::DivisorCurve for EdwardsPoint {
    type FieldElement = FieldElement;

    // Wei25519 a/b
    // https://www.ietf.org/archive/id/draft-ietf-lwig-curve-representations-02.pdf E.3
    fn a() -> Self::FieldElement {
      let mut be_bytes =
        hex::decode("2aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa984914a144").unwrap();
      be_bytes.reverse();
      let le_bytes = be_bytes;
      Self::FieldElement::from_repr(le_bytes.try_into().unwrap()).unwrap()
    }
    fn b() -> Self::FieldElement {
      let mut be_bytes =
        hex::decode("7b425ed097b425ed097b425ed097b425ed097b425ed097b4260b5e9c7710c864").unwrap();
      be_bytes.reverse();
      let le_bytes = be_bytes;

      Self::FieldElement::from_repr(le_bytes.try_into().unwrap()).unwrap()
    }

    // https://www.ietf.org/archive/id/draft-ietf-lwig-curve-representations-02.pdf E.2
    fn to_xy(point: Self) -> Option<(Self::FieldElement, Self::FieldElement)> {
      if bool::from(point.is_identity()) {
        None?;
      }

      // Extract the y coordinate from the compressed point
      let mut edwards_y = point.to_bytes();
      let x_is_odd = edwards_y[31] >> 7;
      edwards_y[31] &= (1 << 7) - 1;
      let edwards_y = Self::FieldElement::from_repr(edwards_y).unwrap();

      // Recover the x coordinate
      let edwards_y_sq = edwards_y * edwards_y;
      let D = -Self::FieldElement::from(121665u64) *
        Self::FieldElement::from(121666u64).invert().unwrap();
      let mut edwards_x = ((edwards_y_sq - Self::FieldElement::ONE) *
        ((D * edwards_y_sq) + Self::FieldElement::ONE).invert().unwrap())
      .sqrt()
      .unwrap();
      if u8::from(bool::from(edwards_x.is_odd())) != x_is_odd {
        edwards_x = -edwards_x;
      }

      // Calculate the x and y coordinates for Wei25519
      let edwards_y_plus_one = Self::FieldElement::ONE + edwards_y;
      let one_minus_edwards_y = Self::FieldElement::ONE - edwards_y;
      let wei_x = (edwards_y_plus_one * one_minus_edwards_y.invert().unwrap()) +
        (Self::FieldElement::from(486662u64) * Self::FieldElement::from(3u64).invert().unwrap());
      let c =
        (-(Self::FieldElement::from(486662u64) + Self::FieldElement::from(2u64))).sqrt().unwrap();
      let wei_y = c * edwards_y_plus_one * (one_minus_edwards_y * edwards_x).invert().unwrap();
      Some((wei_x, wei_y))
    }
  }
}
crypto/evrf/divisors/src/poly.rs (new file, 430 lines)
@@ -0,0 +1,430 @@
use core::ops::{Add, Neg, Sub, Mul, Rem};

use zeroize::Zeroize;

use group::ff::PrimeField;

/// A structure representing a Polynomial with x**i, y**i, and y**i * x**j terms.
#[derive(Clone, PartialEq, Eq, Debug, Zeroize)]
pub struct Poly<F: PrimeField + From<u64>> {
  /// c[i] * y ** (i + 1)
  pub y_coefficients: Vec<F>,
  /// c[i][j] * y ** (i + 1) x ** (j + 1)
  pub yx_coefficients: Vec<Vec<F>>,
  /// c[i] * x ** (i + 1)
  pub x_coefficients: Vec<F>,
  /// Coefficient for x ** 0, y ** 0, and x ** 0 y ** 0 (the coefficient for 1)
  pub zero_coefficient: F,
}
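// Editor's note, not part of the committed file: per the field documentation above, stored
// coefficients start at power one. For example, 3 + 2 x + 5 x y + 7 y^2 would be stored as
//   zero_coefficient: 3
//   x_coefficients:   [2]       (2 * x^1)
//   y_coefficients:   [0, 7]    (0 * y^1 + 7 * y^2)
//   yx_coefficients:  [[5]]     (5 * y^1 * x^1)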
impl<F: PrimeField + From<u64>> Poly<F> {
  /// A polynomial for zero.
  pub fn zero() -> Self {
    Poly {
      y_coefficients: vec![],
      yx_coefficients: vec![],
      x_coefficients: vec![],
      zero_coefficient: F::ZERO,
    }
  }

  /// The amount of terms in the polynomial.
  #[allow(clippy::len_without_is_empty)]
  #[must_use]
  pub fn len(&self) -> usize {
    self.y_coefficients.len() +
      self.yx_coefficients.iter().map(Vec::len).sum::<usize>() +
      self.x_coefficients.len() +
      usize::from(u8::from(self.zero_coefficient != F::ZERO))
  }

  // Remove high-order zero terms, allowing the length of the vectors to equal the amount of terms.
  pub(crate) fn tidy(&mut self) {
    let tidy = |vec: &mut Vec<F>| {
      while vec.last() == Some(&F::ZERO) {
        vec.pop();
      }
    };

    tidy(&mut self.y_coefficients);
    for vec in self.yx_coefficients.iter_mut() {
      tidy(vec);
    }
    while self.yx_coefficients.last() == Some(&vec![]) {
      self.yx_coefficients.pop();
    }
    tidy(&mut self.x_coefficients);
  }
}

impl<F: PrimeField + From<u64>> Add<&Self> for Poly<F> {
  type Output = Self;

  fn add(mut self, other: &Self) -> Self {
    // Expand to be the needed size
    while self.y_coefficients.len() < other.y_coefficients.len() {
      self.y_coefficients.push(F::ZERO);
    }
    while self.yx_coefficients.len() < other.yx_coefficients.len() {
      self.yx_coefficients.push(vec![]);
    }
    for i in 0 .. other.yx_coefficients.len() {
      while self.yx_coefficients[i].len() < other.yx_coefficients[i].len() {
        self.yx_coefficients[i].push(F::ZERO);
      }
    }
    while self.x_coefficients.len() < other.x_coefficients.len() {
      self.x_coefficients.push(F::ZERO);
    }

    // Perform the addition
    for (i, coeff) in other.y_coefficients.iter().enumerate() {
      self.y_coefficients[i] += coeff;
    }
    for (i, coeffs) in other.yx_coefficients.iter().enumerate() {
      for (j, coeff) in coeffs.iter().enumerate() {
        self.yx_coefficients[i][j] += coeff;
      }
    }
    for (i, coeff) in other.x_coefficients.iter().enumerate() {
      self.x_coefficients[i] += coeff;
    }
    self.zero_coefficient += other.zero_coefficient;

    self.tidy();
    self
  }
}

impl<F: PrimeField + From<u64>> Neg for Poly<F> {
  type Output = Self;

  fn neg(mut self) -> Self {
    for y_coeff in self.y_coefficients.iter_mut() {
      *y_coeff = -*y_coeff;
    }
    for yx_coeffs in self.yx_coefficients.iter_mut() {
      for yx_coeff in yx_coeffs.iter_mut() {
        *yx_coeff = -*yx_coeff;
      }
    }
    for x_coeff in self.x_coefficients.iter_mut() {
      *x_coeff = -*x_coeff;
    }
    self.zero_coefficient = -self.zero_coefficient;

    self
  }
}

impl<F: PrimeField + From<u64>> Sub for Poly<F> {
  type Output = Self;

  fn sub(self, other: Self) -> Self {
    self + &-other
  }
}

impl<F: PrimeField + From<u64>> Mul<F> for Poly<F> {
  type Output = Self;

  fn mul(mut self, scalar: F) -> Self {
    if scalar == F::ZERO {
      return Poly::zero();
    }

    for y_coeff in self.y_coefficients.iter_mut() {
      *y_coeff *= scalar;
    }
    for coeffs in self.yx_coefficients.iter_mut() {
      for coeff in coeffs.iter_mut() {
        *coeff *= scalar;
      }
    }
    for x_coeff in self.x_coefficients.iter_mut() {
      *x_coeff *= scalar;
    }
    self.zero_coefficient *= scalar;
    self
  }
}

impl<F: PrimeField + From<u64>> Poly<F> {
  #[must_use]
  fn shift_by_x(mut self, power_of_x: usize) -> Self {
    if power_of_x == 0 {
      return self;
    }

    // Shift up every x coefficient
    for _ in 0 .. power_of_x {
      self.x_coefficients.insert(0, F::ZERO);
      for yx_coeffs in &mut self.yx_coefficients {
        yx_coeffs.insert(0, F::ZERO);
      }
    }

    // Move the zero coefficient
    self.x_coefficients[power_of_x - 1] = self.zero_coefficient;
    self.zero_coefficient = F::ZERO;

    // Move the y coefficients
    // Start by creating yx coefficients with the necessary powers of x
    let mut yx_coefficients_to_push = vec![];
    while yx_coefficients_to_push.len() < power_of_x {
      yx_coefficients_to_push.push(F::ZERO);
    }
    // Now, ensure the yx coefficients has the slots for the y coefficients we're moving
    while self.yx_coefficients.len() < self.y_coefficients.len() {
      self.yx_coefficients.push(yx_coefficients_to_push.clone());
    }
    // Perform the move
    for (i, y_coeff) in self.y_coefficients.drain(..).enumerate() {
      self.yx_coefficients[i][power_of_x - 1] = y_coeff;
    }

    self
  }

  #[must_use]
  fn shift_by_y(mut self, power_of_y: usize) -> Self {
    if power_of_y == 0 {
      return self;
    }

    // Shift up every y coefficient
    for _ in 0 .. power_of_y {
      self.y_coefficients.insert(0, F::ZERO);
      self.yx_coefficients.insert(0, vec![]);
    }

    // Move the zero coefficient
    self.y_coefficients[power_of_y - 1] = self.zero_coefficient;
    self.zero_coefficient = F::ZERO;

    // Move the x coefficients
    self.yx_coefficients[power_of_y - 1] = self.x_coefficients;
    self.x_coefficients = vec![];

    self
  }
}

impl<F: PrimeField + From<u64>> Mul for Poly<F> {
  type Output = Self;

  fn mul(self, other: Self) -> Self {
    let mut res = self.clone() * other.zero_coefficient;

    for (i, y_coeff) in other.y_coefficients.iter().enumerate() {
      let scaled = self.clone() * *y_coeff;
      res = res + &scaled.shift_by_y(i + 1);
    }

    for (y_i, yx_coeffs) in other.yx_coefficients.iter().enumerate() {
      for (x_i, yx_coeff) in yx_coeffs.iter().enumerate() {
        let scaled = self.clone() * *yx_coeff;
        res = res + &scaled.shift_by_y(y_i + 1).shift_by_x(x_i + 1);
      }
    }

    for (i, x_coeff) in other.x_coefficients.iter().enumerate() {
      let scaled = self.clone() * *x_coeff;
      res = res + &scaled.shift_by_x(i + 1);
    }

    res.tidy();
    res
  }
}

impl<F: PrimeField + From<u64>> Poly<F> {
  /// Perform multiplication mod `modulus`.
  #[must_use]
  pub fn mul_mod(self, other: Self, modulus: &Self) -> Self {
    ((self % modulus) * (other % modulus)) % modulus
  }

  /// Perform division, returning the result and remainder.
  ///
  /// Panics upon division by zero, with undefined behavior if a non-tidy divisor is used.
  #[must_use]
  pub fn div_rem(self, divisor: &Self) -> (Self, Self) {
    // The leading y coefficient and associated x coefficient.
    let leading_y = |poly: &Self| -> (_, _) {
      if poly.y_coefficients.len() > poly.yx_coefficients.len() {
        (poly.y_coefficients.len(), 0)
      } else if !poly.yx_coefficients.is_empty() {
        (poly.yx_coefficients.len(), poly.yx_coefficients.last().unwrap().len())
      } else {
        (0, poly.x_coefficients.len())
      }
    };

    let (div_y, div_x) = leading_y(divisor);
    // If this divisor is actually a scalar, don't perform long division
    if (div_y == 0) && (div_x == 0) {
      return (self * divisor.zero_coefficient.invert().unwrap(), Poly::zero());
    }

    // Remove leading terms until the value is less than the divisor
    let mut quotient: Poly<F> = Poly::zero();
    let mut remainder = self.clone();
    loop {
      // If there's nothing left to divide, return
      if remainder == Poly::zero() {
        break;
      }

      let (rem_y, rem_x) = leading_y(&remainder);
      if (rem_y < div_y) || (rem_x < div_x) {
        break;
      }

      let get = |poly: &Poly<F>, y_pow: usize, x_pow: usize| -> F {
        if (y_pow == 0) && (x_pow == 0) {
          poly.zero_coefficient
        } else if x_pow == 0 {
          poly.y_coefficients[y_pow - 1]
        } else if y_pow == 0 {
          poly.x_coefficients[x_pow - 1]
        } else {
          poly.yx_coefficients[y_pow - 1][x_pow - 1]
        }
      };
      let coeff_numerator = get(&remainder, rem_y, rem_x);
      let coeff_denominator = get(divisor, div_y, div_x);

      // We want coeff_denominator scaled by x to equal coeff_numerator
      // x * d = n
      // n / d = x
      let mut quotient_term = Poly::zero();
      // Because this is the coefficient for the leading term of a tidied polynomial, it must be
      // non-zero
      quotient_term.zero_coefficient = coeff_numerator * coeff_denominator.invert().unwrap();

      // Add the necessary yx powers
      let delta_y = rem_y - div_y;
      let delta_x = rem_x - div_x;
      let quotient_term = quotient_term.shift_by_y(delta_y).shift_by_x(delta_x);

      let to_remove = quotient_term.clone() * divisor.clone();
      debug_assert_eq!(get(&to_remove, rem_y, rem_x), coeff_numerator);

      remainder = remainder - to_remove;
      quotient = quotient + &quotient_term;
    }
    debug_assert_eq!((quotient.clone() * divisor.clone()) + &remainder, self);

    (quotient, remainder)
  }
}

impl<F: PrimeField + From<u64>> Rem<&Self> for Poly<F> {
  type Output = Self;

  fn rem(self, modulus: &Self) -> Self {
    self.div_rem(modulus).1
  }
}

impl<F: PrimeField + From<u64>> Poly<F> {
  /// Evaluate this polynomial with the specified x/y values.
  ///
  /// Panics on polynomials with terms whose powers exceed 2**64.
  #[must_use]
  pub fn eval(&self, x: F, y: F) -> F {
    let mut res = self.zero_coefficient;
    for (pow, coeff) in
      self.y_coefficients.iter().enumerate().map(|(i, v)| (u64::try_from(i + 1).unwrap(), v))
    {
      res += y.pow([pow]) * coeff;
    }
    for (y_pow, coeffs) in
      self.yx_coefficients.iter().enumerate().map(|(i, v)| (u64::try_from(i + 1).unwrap(), v))
    {
      let y_pow = y.pow([y_pow]);
      for (x_pow, coeff) in
        coeffs.iter().enumerate().map(|(i, v)| (u64::try_from(i + 1).unwrap(), v))
      {
        res += y_pow * x.pow([x_pow]) * coeff;
      }
    }
    for (pow, coeff) in
      self.x_coefficients.iter().enumerate().map(|(i, v)| (u64::try_from(i + 1).unwrap(), v))
    {
      res += x.pow([pow]) * coeff;
    }
    res
  }

  /// Differentiate a polynomial, reduced by a modulus with a leading y term y**2 x**0, by x and y.
  ///
  /// This function panics if a y**2 term is present within the polynomial.
  #[must_use]
  pub fn differentiate(&self) -> (Poly<F>, Poly<F>) {
    assert!(self.y_coefficients.len() <= 1);
    assert!(self.yx_coefficients.len() <= 1);

    // Differentiation by x practically involves:
    // - Dropping everything without an x component
    // - Shifting everything down a power of x
    // - Multiplying the new coefficient by the power it prior was used with
    let diff_x = {
      let mut diff_x = Poly {
        y_coefficients: vec![],
        yx_coefficients: vec![],
        x_coefficients: vec![],
        zero_coefficient: F::ZERO,
      };
      if !self.x_coefficients.is_empty() {
        let mut x_coeffs = self.x_coefficients.clone();
        diff_x.zero_coefficient = x_coeffs.remove(0);
        diff_x.x_coefficients = x_coeffs;

        let mut prior_x_power = F::from(2);
        for x_coeff in &mut diff_x.x_coefficients {
          *x_coeff *= prior_x_power;
          prior_x_power += F::ONE;
        }
      }

      if !self.yx_coefficients.is_empty() {
        let mut yx_coeffs = self.yx_coefficients[0].clone();
        diff_x.y_coefficients = vec![yx_coeffs.remove(0)];
        diff_x.yx_coefficients = vec![yx_coeffs];

        let mut prior_x_power = F::from(2);
        for yx_coeff in &mut diff_x.yx_coefficients[0] {
          *yx_coeff *= prior_x_power;
          prior_x_power += F::ONE;
        }
      }

      diff_x.tidy();
      diff_x
    };

    // Differentiation by y is trivial
    // It's the y coefficient as the zero coefficient, and the yx coefficients as the x
    // coefficients
    // This is thanks to any y term over y^2 being reduced out
    let diff_y = Poly {
      y_coefficients: vec![],
      yx_coefficients: vec![],
      x_coefficients: self.yx_coefficients.first().cloned().unwrap_or(vec![]),
      zero_coefficient: self.y_coefficients.first().cloned().unwrap_or(F::ZERO),
    };

    (diff_x, diff_y)
  }

  /// Normalize the x coefficient to 1.
  ///
  /// Panics if there is no x coefficient to normalize or if it cannot be normalized to 1.
  #[must_use]
  pub fn normalize_x_coefficient(self) -> Self {
    let scalar = self.x_coefficients[0].invert().unwrap();
    self * scalar
  }
}
crypto/evrf/divisors/src/tests/mod.rs (new file, 235 lines)
@@ -0,0 +1,235 @@
use rand_core::OsRng;

use group::{ff::Field, Group};
use dalek_ff_group::EdwardsPoint;
use pasta_curves::{Ep, Eq};

use crate::{DivisorCurve, Poly, new_divisor};

// Equation 4 in the security proofs
fn check_divisor<C: DivisorCurve>(points: Vec<C>) {
  // Create the divisor
  let divisor = new_divisor::<C>(&points).unwrap();
  let eval = |c| {
    let (x, y) = C::to_xy(c).unwrap();
    divisor.eval(x, y)
  };

  // Decide challenges
  let c0 = C::random(&mut OsRng);
  let c1 = C::random(&mut OsRng);
  let c2 = -(c0 + c1);
  let (slope, intercept) = crate::slope_intercept::<C>(c0, c1);

  let mut rhs = <C as DivisorCurve>::FieldElement::ONE;
  for point in points {
    let (x, y) = C::to_xy(point).unwrap();
    rhs *= intercept - (y - (slope * x));
  }
  assert_eq!(eval(c0) * eval(c1) * eval(c2), rhs);
}

fn test_divisor<C: DivisorCurve>() {
  for i in 1 ..= 255 {
    println!("Test iteration {i}");

    // Select points
    let mut points = vec![];
    for _ in 0 .. i {
      points.push(C::random(&mut OsRng));
    }
    points.push(-points.iter().sum::<C>());
    println!("Points {}", points.len());

    // Perform the original check
    check_divisor(points.clone());

    // Create the divisor
    let divisor = new_divisor::<C>(&points).unwrap();

    // For a divisor interpolating 256 points, as one does when interpreting a 255-bit discrete log
    // with the result of its scalar multiplication against a fixed generator, the lengths of the
    // yx/x coefficients shouldn't supersede the following bounds
    assert!((divisor.yx_coefficients.first().unwrap_or(&vec![]).len()) <= 126);
    assert!((divisor.x_coefficients.len() - 1) <= 127);
    assert!(
      (1 + divisor.yx_coefficients.first().unwrap_or(&vec![]).len() +
        (divisor.x_coefficients.len() - 1) +
        1) <=
        255
    );

    // Decide challenges
    let c0 = C::random(&mut OsRng);
    let c1 = C::random(&mut OsRng);
    let c2 = -(c0 + c1);
    let (slope, intercept) = crate::slope_intercept::<C>(c0, c1);

    // Perform the logarithmic derivative check
    {
      let dx_over_dz = {
        let dx = Poly {
          y_coefficients: vec![],
          yx_coefficients: vec![],
          x_coefficients: vec![C::FieldElement::ZERO, C::FieldElement::from(3)],
          zero_coefficient: C::a(),
        };

        let dy = Poly {
          y_coefficients: vec![C::FieldElement::from(2)],
          yx_coefficients: vec![],
          x_coefficients: vec![],
          zero_coefficient: C::FieldElement::ZERO,
        };

        let dz = (dy.clone() * -slope) + &dx;

        // We want dx/dz, and dz/dx is equal to dy/dx - slope
        // Sagemath claims this, dy / dz, is the proper inverse
        (dy, dz)
      };

      {
        let sanity_eval = |c| {
          let (x, y) = C::to_xy(c).unwrap();
          dx_over_dz.0.eval(x, y) * dx_over_dz.1.eval(x, y).invert().unwrap()
        };
        let sanity = sanity_eval(c0) + sanity_eval(c1) + sanity_eval(c2);
        // This verifies the dx/dz polynomial is correct
        assert_eq!(sanity, C::FieldElement::ZERO);
      }

      // Logarithmic derivative check
      let test = |divisor: Poly<_>| {
        let (dx, dy) = divisor.differentiate();

        let lhs = |c| {
          let (x, y) = C::to_xy(c).unwrap();

          let n_0 = (C::FieldElement::from(3) * (x * x)) + C::a();
          let d_0 = (C::FieldElement::from(2) * y).invert().unwrap();
          let p_0_n_0 = n_0 * d_0;

          let n_1 = dy.eval(x, y);
          let first = p_0_n_0 * n_1;

          let second = dx.eval(x, y);

          let d_1 = divisor.eval(x, y);

          let fraction_1_n = first + second;
          let fraction_1_d = d_1;

          let fraction_2_n = dx_over_dz.0.eval(x, y);
          let fraction_2_d = dx_over_dz.1.eval(x, y);

          fraction_1_n * fraction_2_n * (fraction_1_d * fraction_2_d).invert().unwrap()
        };
        let lhs = lhs(c0) + lhs(c1) + lhs(c2);

        let mut rhs = C::FieldElement::ZERO;
        for point in &points {
          let (x, y) = <C as DivisorCurve>::to_xy(*point).unwrap();
          rhs += (intercept - (y - (slope * x))).invert().unwrap();
        }

        assert_eq!(lhs, rhs);
      };
      // Test the divisor and the divisor with a normalized x coefficient
      test(divisor.clone());
      test(divisor.normalize_x_coefficient());
    }
  }
}

fn test_same_point<C: DivisorCurve>() {
  let mut points = vec![C::random(&mut OsRng)];
  points.push(points[0]);
  points.push(-points.iter().sum::<C>());
  check_divisor(points);
}

fn test_subset_sum_to_infinity<C: DivisorCurve>() {
  // Internally, a binary tree algorithm is used
  // This executes the first pass to end up with [0, 0] for further reductions
  {
    let mut points = vec![C::random(&mut OsRng)];
    points.push(-points[0]);

    let next = C::random(&mut OsRng);
    points.push(next);
    points.push(-next);
    check_divisor(points);
  }

  // This executes the first pass to end up with [0, X, -X, 0]
  {
    let mut points = vec![C::random(&mut OsRng)];
    points.push(-points[0]);

    let x_1 = C::random(&mut OsRng);
    let x_2 = C::random(&mut OsRng);
    points.push(x_1);
    points.push(x_2);

    points.push(-x_1);
    points.push(-x_2);

    let next = C::random(&mut OsRng);
    points.push(next);
    points.push(-next);
    check_divisor(points);
  }
}

#[test]
fn test_divisor_pallas() {
  test_divisor::<Ep>();
  test_same_point::<Ep>();
  test_subset_sum_to_infinity::<Ep>();
}

#[test]
fn test_divisor_vesta() {
  test_divisor::<Eq>();
  test_same_point::<Eq>();
  test_subset_sum_to_infinity::<Eq>();
}

#[test]
fn test_divisor_ed25519() {
  // Since we're implementing Wei25519 ourselves, check the isomorphism works as expected
  {
    let incomplete_add = |p1, p2| {
      let (x1, y1) = EdwardsPoint::to_xy(p1).unwrap();
      let (x2, y2) = EdwardsPoint::to_xy(p2).unwrap();

      // mmadd-1998-cmo
      let u = y2 - y1;
      let uu = u * u;
      let v = x2 - x1;
      let vv = v * v;
      let vvv = v * vv;
      let R = vv * x1;
      let A = uu - vvv - R.double();
      let x3 = v * A;
      let y3 = (u * (R - A)) - (vvv * y1);
      let z3 = vvv;

      // Normalize from XYZ to XY
      let x3 = x3 * z3.invert().unwrap();
      let y3 = y3 * z3.invert().unwrap();

      // Edwards addition -> Wei25519 coordinates should be equivalent to Wei25519 addition
      assert_eq!(EdwardsPoint::to_xy(p1 + p2).unwrap(), (x3, y3));
    };

    for _ in 0 .. 256 {
      incomplete_add(EdwardsPoint::random(&mut OsRng), EdwardsPoint::random(&mut OsRng));
    }
  }

  test_divisor::<EdwardsPoint>();
  test_same_point::<EdwardsPoint>();
  test_subset_sum_to_infinity::<EdwardsPoint>();
}
crypto/evrf/divisors/src/tests/poly.rs (new file, 129 lines)
@@ -0,0 +1,129 @@
// Editor's note: `use rand_core::OsRng;` is added here as `OsRng` is referenced below yet no
// import for it appeared in the extracted listing.
use rand_core::OsRng;

use group::ff::Field;
use pasta_curves::Ep;

use crate::{DivisorCurve, Poly};

type F = <Ep as DivisorCurve>::FieldElement;

#[test]
fn test_poly() {
  let zero = F::ZERO;
  let one = F::ONE;

  {
    let mut poly = Poly::zero();
    poly.y_coefficients = vec![zero, one];

    let mut modulus = Poly::zero();
    modulus.y_coefficients = vec![one];
    assert_eq!(poly % &modulus, Poly::zero());
  }

  {
    let mut poly = Poly::zero();
    poly.y_coefficients = vec![zero, one];

    let mut squared = Poly::zero();
    squared.y_coefficients = vec![zero, zero, zero, one];
    assert_eq!(poly.clone() * poly.clone(), squared);
  }

  {
    let mut a = Poly::zero();
    a.zero_coefficient = F::from(2u64);

    let mut b = Poly::zero();
    b.zero_coefficient = F::from(3u64);

    let mut res = Poly::zero();
    res.zero_coefficient = F::from(6u64);
    assert_eq!(a.clone() * b.clone(), res);

    b.y_coefficients = vec![F::from(4u64)];
    res.y_coefficients = vec![F::from(8u64)];
    assert_eq!(a.clone() * b.clone(), res);
    assert_eq!(b.clone() * a.clone(), res);

    a.x_coefficients = vec![F::from(5u64)];
    res.x_coefficients = vec![F::from(15u64)];
    res.yx_coefficients = vec![vec![F::from(20u64)]];
    assert_eq!(a.clone() * b.clone(), res);
    assert_eq!(b * a.clone(), res);

    // res is now 20xy + 8*y + 15*x + 6
    // res ** 2 =
    // 400*x^2*y^2 + 320*x*y^2 + 64*y^2 + 600*x^2*y + 480*x*y + 96*y + 225*x^2 + 180*x + 36

    let mut squared = Poly::zero();
    squared.y_coefficients = vec![F::from(96u64), F::from(64u64)];
    squared.yx_coefficients =
      vec![vec![F::from(480u64), F::from(600u64)], vec![F::from(320u64), F::from(400u64)]];
    squared.x_coefficients = vec![F::from(180u64), F::from(225u64)];
    squared.zero_coefficient = F::from(36u64);
    assert_eq!(res.clone() * res, squared);
  }
}

#[test]
fn test_differentation() {
  let random = || F::random(&mut OsRng);

  let input = Poly {
    y_coefficients: vec![random()],
    yx_coefficients: vec![vec![random()]],
    x_coefficients: vec![random(), random(), random()],
    zero_coefficient: random(),
  };
  let (diff_x, diff_y) = input.differentiate();
  assert_eq!(
    diff_x,
    Poly {
      y_coefficients: vec![input.yx_coefficients[0][0]],
      yx_coefficients: vec![],
      x_coefficients: vec![
        F::from(2) * input.x_coefficients[1],
        F::from(3) * input.x_coefficients[2]
      ],
      zero_coefficient: input.x_coefficients[0],
    }
  );
  assert_eq!(
    diff_y,
    Poly {
      y_coefficients: vec![],
      yx_coefficients: vec![],
      x_coefficients: vec![input.yx_coefficients[0][0]],
      zero_coefficient: input.y_coefficients[0],
    }
  );

  let input = Poly {
    y_coefficients: vec![random()],
    yx_coefficients: vec![vec![random(), random()]],
    x_coefficients: vec![random(), random(), random(), random()],
    zero_coefficient: random(),
  };
  let (diff_x, diff_y) = input.differentiate();
  assert_eq!(
    diff_x,
    Poly {
      y_coefficients: vec![input.yx_coefficients[0][0]],
      yx_coefficients: vec![vec![F::from(2) * input.yx_coefficients[0][1]]],
      x_coefficients: vec![
        F::from(2) * input.x_coefficients[1],
        F::from(3) * input.x_coefficients[2],
        F::from(4) * input.x_coefficients[3],
      ],
      zero_coefficient: input.x_coefficients[0],
    }
  );
  assert_eq!(
    diff_y,
    Poly {
      y_coefficients: vec![],
      yx_coefficients: vec![],
      x_coefficients: vec![input.yx_coefficients[0][0], input.yx_coefficients[0][1]],
      zero_coefficient: input.y_coefficients[0],
    }
  );
}
crypto/evrf/ec-gadgets/Cargo.toml (new file, 20 lines)
@@ -0,0 +1,20 @@
[package]
name = "generalized-bulletproofs-ec-gadgets"
version = "0.1.0"
description = "Gadgets for working with an embedded Elliptic Curve in a Generalized Bulletproofs circuit"
license = "MIT"
repository = "https://github.com/serai-dex/serai/tree/develop/crypto/evrf/ec-gadgets"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = ["bulletproofs", "circuit", "divisors"]
edition = "2021"

[package.metadata.docs.rs]
all-features = true
rustdoc-args = ["--cfg", "docsrs"]

[dependencies]
generic-array = { version = "1", default-features = false, features = ["alloc"] }

ciphersuite = { path = "../../ciphersuite", version = "0.4", default-features = false, features = ["std"] }

generalized-bulletproofs-circuit-abstraction = { path = "../circuit-abstraction" }
21
crypto/evrf/ec-gadgets/LICENSE
Normal file
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2024 Luke Parker

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

3
crypto/evrf/ec-gadgets/README.md
Normal file
@@ -0,0 +1,3 @@
# Generalized Bulletproofs EC Gadgets

Gadgets for working with an embedded elliptic curve in a Generalized Bulletproofs circuit.

529
crypto/evrf/ec-gadgets/src/dlog.rs
Normal file
@@ -0,0 +1,529 @@
|
||||
use core::fmt;
|
||||
|
||||
use ciphersuite::{
|
||||
group::ff::{Field, PrimeField, BatchInverter},
|
||||
Ciphersuite,
|
||||
};
|
||||
|
||||
use generalized_bulletproofs_circuit_abstraction::*;
|
||||
|
||||
use crate::*;
|
||||
|
||||
/// Parameters for a discrete logarithm proof.
|
||||
///
|
||||
/// This isn't required to be implemented by the Field/Group/Ciphersuite, solely a struct, to
|
||||
/// enable parameterization of discrete log proofs to the bitlength of the discrete logarithm.
|
||||
/// While that may be F::NUM_BITS for a discrete log proof of a full scalar, it could also be 64,
|
||||
/// a discrete log proof for a u64 (such as if opening a Pedersen commitment in-circuit).
|
||||
pub trait DiscreteLogParameters {
|
||||
/// The amount of bits used to represent a scalar.
|
||||
type ScalarBits: ArrayLength;
|
||||
|
||||
/// The amount of x**i coefficients in a divisor.
|
||||
///
|
||||
/// This is the amount of points in a divisor (the amount of bits in a scalar, plus one) divided
|
||||
/// by two.
|
||||
type XCoefficients: ArrayLength;
|
||||
|
||||
/// The amount of x**i coefficients in a divisor, minus one.
|
||||
type XCoefficientsMinusOne: ArrayLength;
|
||||
|
||||
/// The amount of y x**i coefficients in a divisor.
|
||||
///
|
||||
/// This is the amount of points in a divisor (the amount of bits in a scalar, plus one) plus
|
||||
/// one, divided by two, minus two.
|
||||
type YxCoefficients: ArrayLength;
|
||||
}
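
// An illustrative sketch (not part of this file): how these associated types relate for a
// hypothetical 255-bit scalar, using the same typenum arithmetic as the concrete
// implementations later in this PR (e.g. embedwards25519):
//
//   impl DiscreteLogParameters for Example255 {
//     type ScalarBits = U<255>;
//     // Points in a divisor = bits + 1, so (bits + 1) / 2 x**i coefficients
//     type XCoefficients = Quot<Sum<Self::ScalarBits, U1>, U2>;
//     // One less, as the x**1 coefficient is normalized to 1 and omitted
//     type XCoefficientsMinusOne = Diff<Self::XCoefficients, U1>;
//     // ((bits + 1 + 1) / 2) - 2 y x**i coefficients
//     type YxCoefficients = Diff<Quot<Sum<Sum<Self::ScalarBits, U1>, U1>, U2>, U2>;
//   }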
|
||||
|
||||
/// A tabled generator for proving/verifying discrete logarithm claims.
|
||||
#[derive(Clone)]
|
||||
pub struct GeneratorTable<F: PrimeField, Parameters: DiscreteLogParameters>(
|
||||
GenericArray<(F, F), Parameters::ScalarBits>,
|
||||
);
|
||||
|
||||
impl<F: PrimeField, Parameters: DiscreteLogParameters> fmt::Debug
|
||||
for GeneratorTable<F, Parameters>
|
||||
{
|
||||
fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result {
|
||||
fmt
|
||||
.debug_struct("GeneratorTable")
|
||||
.field("x", &self.0[0].0)
|
||||
.field("y", &self.0[0].1)
|
||||
.finish_non_exhaustive()
|
||||
}
|
||||
}
|
||||
|
||||
impl<F: PrimeField, Parameters: DiscreteLogParameters> GeneratorTable<F, Parameters> {
|
||||
/// Create a new table for this generator.
|
||||
///
|
||||
/// The generator is assumed to be well-formed and on-curve. This function may panic if it's not.
|
||||
pub fn new(curve: &CurveSpec<F>, generator_x: F, generator_y: F) -> Self {
|
||||
// mdbl-2007-bl
|
||||
fn dbl<F: PrimeField>(a: F, x1: F, y1: F) -> (F, F) {
|
||||
let xx = x1 * x1;
|
||||
let w = a + (xx + xx.double());
|
||||
let y1y1 = y1 * y1;
|
||||
let r = y1y1 + y1y1;
|
||||
let sss = (y1 * r).double().double();
|
||||
let rr = r * r;
|
||||
|
||||
let b = x1 + r;
|
||||
let b = (b * b) - xx - rr;
|
||||
|
||||
let h = (w * w) - b.double();
|
||||
let x3 = h.double() * y1;
|
||||
let y3 = (w * (b - h)) - rr.double();
|
||||
let z3 = sss;
|
||||
|
||||
// Normalize from XYZ to XY
|
||||
let z3_inv = z3.invert().unwrap();
|
||||
let x3 = x3 * z3_inv;
|
||||
let y3 = y3 * z3_inv;
|
||||
|
||||
(x3, y3)
|
||||
}
|
||||
|
||||
let mut res = Self(GenericArray::default());
|
||||
res.0[0] = (generator_x, generator_y);
|
||||
for i in 1 .. Parameters::ScalarBits::USIZE {
|
||||
let last = res.0[i - 1];
|
||||
res.0[i] = dbl(curve.a, last.0, last.1);
|
||||
}
|
||||
|
||||
res
|
||||
}
|
||||
}
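
// An illustrative sketch (names assumed): the caller tables each fixed generator once, from its
// affine coordinates, before sampling challenges or proving:
//   let table = GeneratorTable::<C::F, Parameters>::new(&curve, generator_x, generator_y);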
|
||||
|
||||
/// A representation of the divisor.
|
||||
///
|
||||
/// The coefficient for x**1 is explicitly excluded as it's expected to be normalized to 1.
|
||||
#[derive(Clone)]
|
||||
pub struct Divisor<Parameters: DiscreteLogParameters> {
|
||||
/// The coefficient for the `y` term of the divisor.
|
||||
///
|
||||
/// There is never more than one `y**i x**0` coefficient as the leading term of the modulus is
|
||||
/// `y**2`. It's assumed the coefficient is non-zero (and present) as it will be for any divisor
|
||||
/// exceeding trivial complexity.
|
||||
pub y: Variable,
|
||||
/// The coefficients for the `y**1 x**i` terms of the polynomial.
|
||||
// This subtraction enforces the divisor to have at least 4 points which is acceptable.
|
||||
// TODO: Double check these constants
|
||||
pub yx: GenericArray<Variable, Parameters::YxCoefficients>,
|
||||
/// The coefficients for the `x**i` terms of the polynomial, skipping x**1.
|
||||
///
|
||||
/// x**1 is skipped as it's expected to be normalized to 1, and therefore constant, in order to
|
||||
/// ensure the divisor is non-zero (as necessary for the proof to be complete).
|
||||
// Subtract 1 from the length due to skipping the coefficient for x**1
|
||||
pub x_from_power_of_2: GenericArray<Variable, Parameters::XCoefficientsMinusOne>,
|
||||
/// The constant term in the polynomial (alternatively, the coefficient for y**0 x**0).
|
||||
pub zero: Variable,
|
||||
}
|
||||
|
||||
/// A point, its discrete logarithm, and the divisor to prove it.
|
||||
#[derive(Clone)]
|
||||
pub struct PointWithDlog<Parameters: DiscreteLogParameters> {
|
||||
/// The point which is supposedly the result of scaling the generator by the discrete logarithm.
|
||||
pub point: (Variable, Variable),
|
||||
/// The discrete logarithm, represented as coefficients of a polynomial of 2**i.
|
||||
pub dlog: GenericArray<Variable, Parameters::ScalarBits>,
|
||||
/// The divisor interpolating the relevant doublings of generator with the inverse of the point.
|
||||
pub divisor: Divisor<Parameters>,
|
||||
}
|
||||
|
||||
/// A struct containing a point used for the evaluation of a divisor.
|
||||
///
|
||||
/// Preprocesses and caches as much of the calculation as possible to minimize work upon reuse of
|
||||
/// challenge points.
|
||||
struct ChallengePoint<F: PrimeField, Parameters: DiscreteLogParameters> {
|
||||
y: F,
|
||||
yx: GenericArray<F, Parameters::YxCoefficients>,
|
||||
x: GenericArray<F, Parameters::XCoefficients>,
|
||||
p_0_n_0: F,
|
||||
x_p_0_n_0: GenericArray<F, Parameters::YxCoefficients>,
|
||||
p_1_n: F,
|
||||
p_1_d: F,
|
||||
}
|
||||
|
||||
impl<F: PrimeField, Parameters: DiscreteLogParameters> ChallengePoint<F, Parameters> {
|
||||
fn new(
|
||||
curve: &CurveSpec<F>,
|
||||
// The slope between all of the challenge points
|
||||
slope: F,
|
||||
// The x and y coordinates
|
||||
x: F,
|
||||
y: F,
|
||||
// The inversion of twice the y coordinate
|
||||
// We accept this as an argument so that the caller can calculate these with a batch inversion
|
||||
inv_two_y: F,
|
||||
) -> Self {
|
||||
// Powers of x, skipping x**0
|
||||
let divisor_x_len = Parameters::XCoefficients::USIZE;
|
||||
let mut x_pows = GenericArray::default();
|
||||
x_pows[0] = x;
|
||||
for i in 1 .. divisor_x_len {
|
||||
let last = x_pows[i - 1];
|
||||
x_pows[i] = last * x;
|
||||
}
|
||||
|
||||
// Powers of x multiplied by y
|
||||
let divisor_yx_len = Parameters::YxCoefficients::USIZE;
|
||||
let mut yx = GenericArray::default();
|
||||
// Skips x**0
|
||||
yx[0] = y * x;
|
||||
for i in 1 .. divisor_yx_len {
|
||||
let last = yx[i - 1];
|
||||
yx[i] = last * x;
|
||||
}
|
||||
|
||||
let x_sq = x.square();
|
||||
let three_x_sq = x_sq.double() + x_sq;
|
||||
let three_x_sq_plus_a = three_x_sq + curve.a;
|
||||
let two_y = y.double();
|
||||
|
||||
// p_0_n_0 from `DivisorChallenge`
|
||||
let p_0_n_0 = three_x_sq_plus_a * inv_two_y;
|
||||
let mut x_p_0_n_0 = GenericArray::default();
|
||||
// Since this iterates over x, which skips x**0, this also skips p_0_n_0 x**0
|
||||
for (i, x) in x_pows.iter().take(divisor_yx_len).enumerate() {
|
||||
x_p_0_n_0[i] = p_0_n_0 * x;
|
||||
}
|
||||
|
||||
// p_1_n from `DivisorChallenge`
|
||||
let p_1_n = two_y;
|
||||
// p_1_d from `DivisorChallenge`
|
||||
let p_1_d = (-slope * p_1_n) + three_x_sq_plus_a;
|
||||
|
||||
ChallengePoint { x: x_pows, y, yx, p_0_n_0, x_p_0_n_0, p_1_n, p_1_d }
|
||||
}
|
||||
}
|
||||
|
||||
// `DivisorChallenge` from the section `Discrete Log Proof`
|
||||
fn divisor_challenge_eval<C: Ciphersuite, Parameters: DiscreteLogParameters>(
|
||||
circuit: &mut Circuit<C>,
|
||||
divisor: &Divisor<Parameters>,
|
||||
challenge: &ChallengePoint<C::F, Parameters>,
|
||||
) -> Variable {
|
||||
// The evaluation of the divisor differentiated by y, further multiplied by p_0_n_0
|
||||
// Differentiation drops everything without a y coefficient, and drops what remains by a power
|
||||
// of y
|
||||
// (y**1 -> y**0, yx**i -> x**i)
|
||||
// This aligns with p_0_n_1 from `DivisorChallenge`
|
||||
let p_0_n_1 = {
|
||||
let mut p_0_n_1 = LinComb::empty().term(challenge.p_0_n_0, divisor.y);
|
||||
for (j, var) in divisor.yx.iter().enumerate() {
|
||||
// This does not raise by `j + 1` as x_p_0_n_0 omits x**0
|
||||
p_0_n_1 = p_0_n_1.term(challenge.x_p_0_n_0[j], *var);
|
||||
}
|
||||
p_0_n_1
|
||||
};
|
||||
|
||||
// The evaluation of the divisor differentiated by x
|
||||
// This aligns with p_0_n_2 from `DivisorChallenge`
|
||||
let p_0_n_2 = {
|
||||
// The coefficient for x**1 is 1, so 1 becomes the new zero coefficient
|
||||
let mut p_0_n_2 = LinComb::empty().constant(C::F::ONE);
|
||||
|
||||
// Handle the new y coefficient
|
||||
p_0_n_2 = p_0_n_2.term(challenge.y, divisor.yx[0]);
|
||||
|
||||
// Handle the new yx coefficients
|
||||
for (j, yx) in divisor.yx.iter().enumerate().skip(1) {
|
||||
// For the power which was shifted down, we multiply this coefficient
|
||||
// 3 x**2 -> 2 * 3 x**1
|
||||
let original_power_of_x = C::F::from(u64::try_from(j + 1).unwrap());
|
||||
// `j - 1` so `j = 1` indexes yx[0] as yx[0] is the y x**1
|
||||
// (yx omits y x**0)
|
||||
let this_weight = original_power_of_x * challenge.yx[j - 1];
|
||||
p_0_n_2 = p_0_n_2.term(this_weight, *yx);
|
||||
}
|
||||
|
||||
// Handle the x coefficients
|
||||
// We don't skip the first one as `x_from_power_of_2` already omits x**1
|
||||
for (i, x) in divisor.x_from_power_of_2.iter().enumerate() {
|
||||
// i + 2 as the paper expects i to start from 1 and be + 1, yet we start from 0
|
||||
let original_power_of_x = C::F::from(u64::try_from(i + 2).unwrap());
|
||||
// Still x[i] as x[0] is x**1
|
||||
let this_weight = original_power_of_x * challenge.x[i];
|
||||
|
||||
p_0_n_2 = p_0_n_2.term(this_weight, *x);
|
||||
}
|
||||
|
||||
p_0_n_2
|
||||
};
|
||||
|
||||
// p_0_n from `DivisorChallenge`
|
||||
let p_0_n = p_0_n_1 + &p_0_n_2;
|
||||
|
||||
// Evaluation of the divisor
|
||||
// p_0_d from `DivisorChallenge`
|
||||
let p_0_d = {
|
||||
let mut p_0_d = LinComb::empty().term(challenge.y, divisor.y);
|
||||
|
||||
for (var, c_yx) in divisor.yx.iter().zip(&challenge.yx) {
|
||||
p_0_d = p_0_d.term(*c_yx, *var);
|
||||
}
|
||||
|
||||
for (i, var) in divisor.x_from_power_of_2.iter().enumerate() {
|
||||
// This `i+1` is preserved, despite most not being as x omits x**0, as this assumes we
|
||||
// start with `i=1`
|
||||
p_0_d = p_0_d.term(challenge.x[i + 1], *var);
|
||||
}
|
||||
|
||||
// Adding x effectively adds a `1 x` term, ensuring the divisor isn't 0
|
||||
p_0_d.term(C::F::ONE, divisor.zero).constant(challenge.x[0])
|
||||
};
|
||||
|
||||
// Calculate the joint numerator
|
||||
// p_n from `DivisorChallenge`
|
||||
let p_n = p_0_n * challenge.p_1_n;
|
||||
// Calculate the joint denominator
|
||||
// p_d from `DivisorChallenge`
|
||||
let p_d = p_0_d * challenge.p_1_d;
|
||||
|
||||
// We want `n / d = o`
|
||||
// `n / d = o` == `n = d * o`
|
||||
// These are safe unwraps as they're solely done by the prover and should always be non-zero
|
||||
let witness =
|
||||
circuit.eval(&p_d).map(|p_d| (p_d, circuit.eval(&p_n).unwrap() * p_d.invert().unwrap()));
|
||||
let (_l, o, n_claim) = circuit.mul(Some(p_d), None, witness);
|
||||
circuit.equality(p_n, &n_claim.into());
|
||||
o
|
||||
}
|
||||
|
||||
/// A challenge to evaluate divisors with.
|
||||
///
|
||||
/// This challenge must be sampled after writing the commitments to the transcript. This challenge
|
||||
/// is reusable across various divisors.
|
||||
pub struct DiscreteLogChallenge<F: PrimeField, Parameters: DiscreteLogParameters> {
|
||||
c0: ChallengePoint<F, Parameters>,
|
||||
c1: ChallengePoint<F, Parameters>,
|
||||
c2: ChallengePoint<F, Parameters>,
|
||||
slope: F,
|
||||
intercept: F,
|
||||
}
|
||||
|
||||
/// A generator which has been challenged and is ready for use in evaluating discrete logarithm
|
||||
/// claims.
|
||||
pub struct ChallengedGenerator<F: PrimeField, Parameters: DiscreteLogParameters>(
|
||||
GenericArray<F, Parameters::ScalarBits>,
|
||||
);
|
||||
|
||||
/// Gadgets for proving the discrete logarithm of points on an elliptic curve defined over the
|
||||
/// scalar field of the curve of the Bulletproof.
|
||||
pub trait EcDlogGadgets<C: Ciphersuite> {
|
||||
/// Sample a challenge for a series of discrete logarithm claims.
|
||||
///
|
||||
/// This must be called after writing the commitments to the transcript.
|
||||
///
|
||||
/// The generators are assumed to be non-empty. They are not transcripted. If your generators are
|
||||
/// dynamic, they must be properly transcripted into the context.
|
||||
///
|
||||
/// May panic/have undefined behavior if an assumption is broken.
|
||||
#[allow(clippy::type_complexity)]
|
||||
fn discrete_log_challenge<T: Transcript, Parameters: DiscreteLogParameters>(
|
||||
&self,
|
||||
transcript: &mut T,
|
||||
curve: &CurveSpec<C::F>,
|
||||
generators: &[GeneratorTable<C::F, Parameters>],
|
||||
) -> (DiscreteLogChallenge<C::F, Parameters>, Vec<ChallengedGenerator<C::F, Parameters>>);
|
||||
|
||||
/// Prove this point has the specified discrete logarithm over the specified generator.
|
||||
///
|
||||
/// The discrete logarithm is not validated to be in a canonical form. The only guarantee made on
|
||||
/// it is that it's a consistent representation of _a_ discrete logarithm (reuse won't enable
|
||||
/// re-interpretation as a distinct discrete logarithm).
|
||||
///
|
||||
/// This does ensure the point is on-curve.
|
||||
///
|
||||
/// This MUST only be called with `Variable`s present within commitments.
|
||||
///
|
||||
/// May panic/have undefined behavior if an assumption is broken, or if passed an invalid
|
||||
/// witness.
|
||||
fn discrete_log<Parameters: DiscreteLogParameters>(
|
||||
&mut self,
|
||||
curve: &CurveSpec<C::F>,
|
||||
point: PointWithDlog<Parameters>,
|
||||
challenge: &DiscreteLogChallenge<C::F, Parameters>,
|
||||
challenged_generator: &ChallengedGenerator<C::F, Parameters>,
|
||||
) -> OnCurve;
|
||||
}
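
// An illustrative call flow (a sketch; `circuit`, `transcript`, and `generators` are assumed):
//   1. Write the Pedersen vector commitments to the transcript.
//   2. Sample the reusable challenge:
//        let (challenge, challenged_generators) =
//          circuit.discrete_log_challenge(&mut transcript, &curve, &generators);
//   3. For each claim, with its `PointWithDlog` variables members of those commitments:
//        let on_curve =
//          circuit.discrete_log(&curve, point_with_dlog, &challenge, &challenged_generators[i]);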
|
||||
|
||||
impl<C: Ciphersuite> EcDlogGadgets<C> for Circuit<C> {
|
||||
// This is part of `DiscreteLog` from `Discrete Log Proof`, specifically, the challenges and
|
||||
// the calculations dependent solely on them
|
||||
fn discrete_log_challenge<T: Transcript, Parameters: DiscreteLogParameters>(
|
||||
&self,
|
||||
transcript: &mut T,
|
||||
curve: &CurveSpec<C::F>,
|
||||
generators: &[GeneratorTable<C::F, Parameters>],
|
||||
) -> (DiscreteLogChallenge<C::F, Parameters>, Vec<ChallengedGenerator<C::F, Parameters>>) {
|
||||
// Get the challenge points
|
||||
// TODO: Implement a proper hash to curve
|
||||
let (c0_x, c0_y) = loop {
|
||||
let c0_x: C::F = transcript.challenge();
|
||||
let Some(c0_y) =
|
||||
Option::<C::F>::from(((c0_x.square() * c0_x) + (curve.a * c0_x) + curve.b).sqrt())
|
||||
else {
|
||||
continue;
|
||||
};
|
||||
// Takes the even y coordinate so as to not be dependent on whatever root the above sqrt
// happens to return
|
||||
// TODO: Randomly select which to take
|
||||
break (c0_x, if bool::from(c0_y.is_odd()) { -c0_y } else { c0_y });
|
||||
};
|
||||
let (c1_x, c1_y) = loop {
|
||||
let c1_x: C::F = transcript.challenge();
|
||||
let Some(c1_y) =
|
||||
Option::<C::F>::from(((c1_x.square() * c1_x) + (curve.a * c1_x) + curve.b).sqrt())
|
||||
else {
|
||||
continue;
|
||||
};
|
||||
break (c1_x, if bool::from(c1_y.is_odd()) { -c1_y } else { c1_y });
|
||||
};
|
||||
|
||||
// mmadd-1998-cmo
|
||||
fn incomplete_add<F: PrimeField>(x1: F, y1: F, x2: F, y2: F) -> Option<(F, F)> {
|
||||
if x1 == x2 {
|
||||
None?
|
||||
}
|
||||
|
||||
let u = y2 - y1;
|
||||
let uu = u * u;
|
||||
let v = x2 - x1;
|
||||
let vv = v * v;
|
||||
let vvv = v * vv;
|
||||
let r = vv * x1;
|
||||
let a = uu - vvv - r.double();
|
||||
let x3 = v * a;
|
||||
let y3 = (u * (r - a)) - (vvv * y1);
|
||||
let z3 = vvv;
|
||||
|
||||
// Normalize from XYZ to XY
|
||||
let z3_inv = Option::<F>::from(z3.invert())?;
|
||||
let x3 = x3 * z3_inv;
|
||||
let y3 = y3 * z3_inv;
|
||||
|
||||
Some((x3, y3))
|
||||
}
|
||||
|
||||
let (c2_x, c2_y) = incomplete_add::<C::F>(c0_x, c0_y, c1_x, c1_y)
|
||||
.expect("randomly selected points shared an x coordinate");
|
||||
// We want C0, C1, C2 = -(C0 + C1)
|
||||
let c2_y = -c2_y;
|
||||
|
||||
// Calculate the slope and intercept
|
||||
// Safe invert as these x coordinates must be distinct due to passing the above incomplete_add
|
||||
let slope = (c1_y - c0_y) * (c1_x - c0_x).invert().unwrap();
|
||||
let intercept = c0_y - (slope * c0_x);
|
||||
|
||||
// Calculate the inversions for 2 c_y (for each c) and all of the challenged generators
|
||||
let mut inversions = vec![C::F::ZERO; 3 + (generators.len() * Parameters::ScalarBits::USIZE)];
|
||||
|
||||
// Needed for the left-hand side eval
|
||||
{
|
||||
inversions[0] = c0_y.double();
|
||||
inversions[1] = c1_y.double();
|
||||
inversions[2] = c2_y.double();
|
||||
}
|
||||
|
||||
// Perform the inversions for the generators
|
||||
for (i, generator) in generators.iter().enumerate() {
|
||||
// Needed for the right-hand side eval
|
||||
for (j, generator) in generator.0.iter().enumerate() {
|
||||
// `DiscreteLog` has weights of `(mu - (G_i.y + (slope * G_i.x)))**-1` in its last line
|
||||
inversions[3 + (i * Parameters::ScalarBits::USIZE) + j] =
|
||||
intercept - (generator.1 - (slope * generator.0));
|
||||
}
|
||||
}
|
||||
for challenge_inversion in &inversions {
|
||||
// This should be unreachable barring negligible probability
|
||||
if challenge_inversion.is_zero().into() {
|
||||
panic!("trying to invert 0");
|
||||
}
|
||||
}
|
||||
let mut scratch = vec![C::F::ZERO; inversions.len()];
|
||||
let _ = BatchInverter::invert_with_external_scratch(&mut inversions, &mut scratch);
|
||||
|
||||
let mut inversions = inversions.into_iter();
|
||||
let inv_c0_two_y = inversions.next().unwrap();
|
||||
let inv_c1_two_y = inversions.next().unwrap();
|
||||
let inv_c2_two_y = inversions.next().unwrap();
|
||||
|
||||
let c0 = ChallengePoint::new(curve, slope, c0_x, c0_y, inv_c0_two_y);
|
||||
let c1 = ChallengePoint::new(curve, slope, c1_x, c1_y, inv_c1_two_y);
|
||||
let c2 = ChallengePoint::new(curve, slope, c2_x, c2_y, inv_c2_two_y);
|
||||
|
||||
// Fill in the inverted values
|
||||
let mut challenged_generators = Vec::with_capacity(generators.len());
|
||||
for _ in 0 .. generators.len() {
|
||||
let mut challenged_generator = GenericArray::default();
|
||||
for i in 0 .. Parameters::ScalarBits::USIZE {
|
||||
challenged_generator[i] = inversions.next().unwrap();
|
||||
}
|
||||
challenged_generators.push(ChallengedGenerator(challenged_generator));
|
||||
}
|
||||
|
||||
(DiscreteLogChallenge { c0, c1, c2, slope, intercept }, challenged_generators)
|
||||
}
|
||||
|
||||
// `DiscreteLog` from `Discrete Log Proof`
|
||||
fn discrete_log<Parameters: DiscreteLogParameters>(
|
||||
&mut self,
|
||||
curve: &CurveSpec<C::F>,
|
||||
point: PointWithDlog<Parameters>,
|
||||
challenge: &DiscreteLogChallenge<C::F, Parameters>,
|
||||
challenged_generator: &ChallengedGenerator<C::F, Parameters>,
|
||||
) -> OnCurve {
|
||||
let PointWithDlog { divisor, dlog, point } = point;
|
||||
|
||||
// Ensure this is being safely called
|
||||
let arg_iter = [point.0, point.1, divisor.y, divisor.zero];
|
||||
let arg_iter = arg_iter.iter().chain(divisor.yx.iter());
|
||||
let arg_iter = arg_iter.chain(divisor.x_from_power_of_2.iter());
|
||||
let arg_iter = arg_iter.chain(dlog.iter());
|
||||
for variable in arg_iter {
|
||||
debug_assert!(
|
||||
matches!(variable, Variable::CG { .. } | Variable::CH { .. } | Variable::V(_)),
"discrete log proofs require all arguments to belong to commitments",
|
||||
);
|
||||
}
|
||||
|
||||
// Check the point is on curve
|
||||
let point = self.on_curve(curve, point);
|
||||
|
||||
// The challenge has already been sampled so those lines aren't necessary
|
||||
|
||||
// lhs from the paper, evaluating the divisor
|
||||
let lhs_eval = LinComb::from(divisor_challenge_eval(self, &divisor, &challenge.c0)) +
|
||||
&LinComb::from(divisor_challenge_eval(self, &divisor, &challenge.c1)) +
|
||||
&LinComb::from(divisor_challenge_eval(self, &divisor, &challenge.c2));
|
||||
|
||||
// Interpolate the doublings of the generator
|
||||
let mut rhs_eval = LinComb::empty();
|
||||
// We call this `bit` yet it's not constrained to being a bit
|
||||
// It's presumed to be a bit, yet it may be malleated
|
||||
for (bit, weight) in dlog.into_iter().zip(&challenged_generator.0) {
|
||||
rhs_eval = rhs_eval.term(*weight, bit);
|
||||
}
|
||||
|
||||
// Interpolate the output point
|
||||
// intercept - (y - (slope * x))
|
||||
// intercept - y + (slope * x)
|
||||
// -y + (slope * x) + intercept
|
||||
// EXCEPT the output point we're proving the discrete log for isn't the one interpolated
|
||||
// Its negative is, so -y becomes y
|
||||
// y + (slope * x) + intercept
|
||||
let output_interpolation = LinComb::empty()
|
||||
.constant(challenge.intercept)
|
||||
.term(C::F::ONE, point.y)
|
||||
.term(challenge.slope, point.x);
|
||||
let output_interpolation_eval = self.eval(&output_interpolation);
|
||||
let (_output_interpolation, inverse) =
|
||||
self.inverse(Some(output_interpolation), output_interpolation_eval);
|
||||
rhs_eval = rhs_eval.term(C::F::ONE, inverse);
|
||||
|
||||
self.equality(lhs_eval, &rhs_eval);
|
||||
|
||||
point
|
||||
}
|
||||
}
|
||||
130
crypto/evrf/ec-gadgets/src/lib.rs
Normal file
@@ -0,0 +1,130 @@
|
||||
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
|
||||
#![doc = include_str!("../README.md")]
|
||||
#![deny(missing_docs)]
|
||||
#![allow(non_snake_case)]
|
||||
|
||||
use generic_array::{typenum::Unsigned, ArrayLength, GenericArray};
|
||||
|
||||
use ciphersuite::{group::ff::Field, Ciphersuite};
|
||||
|
||||
use generalized_bulletproofs_circuit_abstraction::*;
|
||||
|
||||
mod dlog;
|
||||
pub use dlog::*;
|
||||
|
||||
/// The specification of a short Weierstrass curve over the field `F`.
|
||||
///
|
||||
/// The short Weierstrass curve is defined via the formula `y**2 = x**3 + a*x + b`.
|
||||
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
|
||||
pub struct CurveSpec<F> {
|
||||
/// The `a` constant in the curve formula.
|
||||
pub a: F,
|
||||
/// The `b` constant in the curve formula.
|
||||
pub b: F,
|
||||
}
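
// An illustrative sketch (not part of this crate): for the embedwards25519 curve added later in
// this PR, y**2 = x**3 - 3*x + b, the spec would be `CurveSpec { a: -F::from(3u64), b }`, where
// `F` is the scalar field of the curve the Bulletproof is performed over.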
|
||||
|
||||
/// A struct for a point on a towered curve which has been confirmed to be on-curve.
|
||||
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
|
||||
pub struct OnCurve {
|
||||
pub(crate) x: Variable,
|
||||
pub(crate) y: Variable,
|
||||
}
|
||||
|
||||
impl OnCurve {
|
||||
/// The variable for the x-coordinate.
|
||||
pub fn x(&self) -> Variable {
|
||||
self.x
|
||||
}
|
||||
/// The variable for the y-coordinate.
|
||||
pub fn y(&self) -> Variable {
|
||||
self.y
|
||||
}
|
||||
}
|
||||
|
||||
/// Gadgets for working with points on an elliptic curve defined over the scalar field of the curve
|
||||
/// of the Bulletproof.
|
||||
pub trait EcGadgets<C: Ciphersuite> {
|
||||
/// Constrain an x and y coordinate as being on the specified curve.
|
||||
///
|
||||
/// The specified curve is defined over the scalar field of the curve this proof is performed
|
||||
/// over, offering efficient arithmetic.
|
||||
///
|
||||
/// May panic if called by the prover and the point is not actually on-curve.
|
||||
fn on_curve(&mut self, curve: &CurveSpec<C::F>, point: (Variable, Variable)) -> OnCurve;
|
||||
|
||||
/// Perform incomplete addition for a fixed point and an on-curve point.
|
||||
///
|
||||
/// `a` is the x and y coordinates of the fixed point, assumed to be on-curve.
|
||||
///
|
||||
/// `b` is a point prior checked to be on-curve.
|
||||
///
|
||||
/// `c` is a point prior checked to be on-curve, constrained to be the sum of `a` and `b`.
|
||||
///
|
||||
/// `a` and `b` are checked to have distinct x coordinates.
|
||||
///
|
||||
/// This function may panic if `a` is malformed, or if called by the prover and `c` is not
/// actually the sum of `a` and `b`.
|
||||
fn incomplete_add_fixed(&mut self, a: (C::F, C::F), b: OnCurve, c: OnCurve) -> OnCurve;
|
||||
}
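
// An illustrative sketch (variable names assumed): each point is first constrained on-curve,
// then the incomplete addition of a fixed point is constrained:
//   let b = circuit.on_curve(&curve, (b_x, b_y));
//   let c = circuit.on_curve(&curve, (c_x, c_y));
//   circuit.incomplete_add_fixed((a_x, a_y), b, c);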
|
||||
|
||||
impl<C: Ciphersuite> EcGadgets<C> for Circuit<C> {
|
||||
fn on_curve(&mut self, curve: &CurveSpec<C::F>, (x, y): (Variable, Variable)) -> OnCurve {
|
||||
let x_eval = self.eval(&LinComb::from(x));
|
||||
let (_x, _x_2, x2) =
|
||||
self.mul(Some(LinComb::from(x)), Some(LinComb::from(x)), x_eval.map(|x| (x, x)));
|
||||
let (_x, _x_2, x3) =
|
||||
self.mul(Some(LinComb::from(x2)), Some(LinComb::from(x)), x_eval.map(|x| (x * x, x)));
|
||||
let expected_y2 = LinComb::from(x3).term(curve.a, x).constant(curve.b);
|
||||
|
||||
let y_eval = self.eval(&LinComb::from(y));
|
||||
let (_y, _y_2, y2) =
|
||||
self.mul(Some(LinComb::from(y)), Some(LinComb::from(y)), y_eval.map(|y| (y, y)));
|
||||
|
||||
self.equality(y2.into(), &expected_y2);
|
||||
|
||||
OnCurve { x, y }
|
||||
}
|
||||
|
||||
fn incomplete_add_fixed(&mut self, a: (C::F, C::F), b: OnCurve, c: OnCurve) -> OnCurve {
|
||||
// Check b.x != a.0
|
||||
{
|
||||
let bx_lincomb = LinComb::from(b.x);
|
||||
let bx_eval = self.eval(&bx_lincomb);
|
||||
self.inequality(bx_lincomb, &LinComb::empty().constant(a.0), bx_eval.map(|bx| (bx, a.0)));
|
||||
}
|
||||
|
||||
let (x0, y0) = (a.0, a.1);
|
||||
let (x1, y1) = (b.x, b.y);
|
||||
let (x2, y2) = (c.x, c.y);
|
||||
|
||||
let slope_eval = self.eval(&LinComb::from(x1)).map(|x1| {
|
||||
let y1 = self.eval(&LinComb::from(b.y)).unwrap();
|
||||
|
||||
(y1 - y0) * (x1 - x0).invert().unwrap()
|
||||
});
|
||||
|
||||
// slope * (x1 - x0) = y1 - y0
|
||||
let x1_minus_x0 = LinComb::from(x1).constant(-x0);
|
||||
let x1_minus_x0_eval = self.eval(&x1_minus_x0);
|
||||
let (slope, _r, o) =
|
||||
self.mul(None, Some(x1_minus_x0), slope_eval.map(|slope| (slope, x1_minus_x0_eval.unwrap())));
|
||||
self.equality(LinComb::from(o), &LinComb::from(y1).constant(-y0));
|
||||
|
||||
// slope * (x2 - x0) = -y2 - y0
|
||||
let x2_minus_x0 = LinComb::from(x2).constant(-x0);
|
||||
let x2_minus_x0_eval = self.eval(&x2_minus_x0);
|
||||
let (_slope, _x2_minus_x0, o) = self.mul(
|
||||
Some(slope.into()),
|
||||
Some(x2_minus_x0),
|
||||
slope_eval.map(|slope| (slope, x2_minus_x0_eval.unwrap())),
|
||||
);
|
||||
self.equality(o.into(), &LinComb::empty().term(-C::F::ONE, y2).constant(-y0));
|
||||
|
||||
// slope * slope = x0 + x1 + x2
|
||||
let (_slope, _slope_2, o) =
|
||||
self.mul(Some(slope.into()), Some(slope.into()), slope_eval.map(|slope| (slope, slope)));
|
||||
self.equality(o.into(), &LinComb::from(x1).term(C::F::ONE, x2).constant(x0));
|
||||
|
||||
OnCurve { x: x2, y: y2 }
|
||||
}
|
||||
}
|
||||
39
crypto/evrf/embedwards25519/Cargo.toml
Normal file
@@ -0,0 +1,39 @@
|
||||
[package]
|
||||
name = "embedwards25519"
|
||||
version = "0.1.0"
|
||||
description = "A curve defined over the Ed25519 scalar field"
|
||||
license = "MIT"
|
||||
repository = "https://github.com/serai-dex/serai/tree/develop/crypto/evrf/embedwards25519"
|
||||
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
|
||||
keywords = ["curve25519", "ed25519", "ristretto255", "group"]
|
||||
edition = "2021"
|
||||
|
||||
[package.metadata.docs.rs]
|
||||
all-features = true
|
||||
rustdoc-args = ["--cfg", "docsrs"]
|
||||
|
||||
[dependencies]
|
||||
rustversion = "1"
|
||||
hex-literal = { version = "0.4", default-features = false }
|
||||
|
||||
rand_core = { version = "0.6", default-features = false, features = ["std"] }
|
||||
|
||||
zeroize = { version = "^1.5", default-features = false, features = ["std", "zeroize_derive"] }
|
||||
subtle = { version = "^2.4", default-features = false, features = ["std"] }
|
||||
|
||||
generic-array = { version = "0.14", default-features = false }
|
||||
crypto-bigint = { version = "0.5", default-features = false, features = ["zeroize"] }
|
||||
|
||||
dalek-ff-group = { path = "../../dalek-ff-group", version = "0.4", default-features = false }
|
||||
|
||||
blake2 = { version = "0.10", default-features = false, features = ["std"] }
|
||||
ciphersuite = { path = "../../ciphersuite", version = "0.4", default-features = false, features = ["std"] }
|
||||
ec-divisors = { path = "../divisors" }
|
||||
generalized-bulletproofs-ec-gadgets = { path = "../ec-gadgets" }
|
||||
|
||||
[dev-dependencies]
|
||||
hex = "0.4"
|
||||
|
||||
rand_core = { version = "0.6", features = ["std"] }
|
||||
|
||||
ff-group-tests = { path = "../../ff-group-tests" }
|
||||
21
crypto/evrf/embedwards25519/LICENSE
Normal file
@@ -0,0 +1,21 @@
|
||||
MIT License
|
||||
|
||||
Copyright (c) 2022-2024 Luke Parker
|
||||
|
||||
Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||
of this software and associated documentation files (the "Software"), to deal
|
||||
in the Software without restriction, including without limitation the rights
|
||||
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||
copies of the Software, and to permit persons to whom the Software is
|
||||
furnished to do so, subject to the following conditions:
|
||||
|
||||
The above copyright notice and this permission notice shall be included in all
|
||||
copies or substantial portions of the Software.
|
||||
|
||||
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
||||
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
|
||||
SOFTWARE.
|
||||
21
crypto/evrf/embedwards25519/README.md
Normal file
@@ -0,0 +1,21 @@
# embedwards25519

A curve defined over the Ed25519 scalar field.

This curve was found via
[tevador's script](https://gist.github.com/tevador/4524c2092178df08996487d4e272b096)
for finding curves (specifically, curve cycles), modified to search for curves
whose field is the Ed25519 scalar field (not the Ed25519 field).

```
p = 0x1000000000000000000000000000000014def9dea2f79cd65812631a5cf5d3ed
q = 0x0fffffffffffffffffffffffffffffffe53f4debb78ff96877063f0306eef96b
D = -420435
y^2 = x^3 - 3*x + 4188043517836764736459661287169077812555441231147410753119540549773825148767
```

The embedding degree is `(q-1)/2`.

This curve should not be used with single-coordinate ladders, and points should
always be represented in a compressed form (preventing receiving off-curve
points).
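
As a usage sketch (assuming the `Embedwards25519` ciphersuite and the `Point` and
`Scalar` types this crate defines), scalars are derived by hashing and points are
handled in their 32-byte compressed encoding:

```rust
use ciphersuite::{group::GroupEncoding, Ciphersuite};
use embedwards25519::{Embedwards25519, Point, Scalar};

// Derive a scalar from a domain-separated hash and scale the generator by it
let scalar: Scalar = Embedwards25519::hash_to_F(b"example dst", b"example data");
let point: Point = Embedwards25519::generator() * scalar;

// Points round-trip through their compressed encoding
let bytes = point.to_bytes();
assert_eq!(Point::from_bytes(&bytes).unwrap(), point);
```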
293
crypto/evrf/embedwards25519/src/backend.rs
Normal file
@@ -0,0 +1,293 @@
|
||||
use zeroize::Zeroize;
|
||||
|
||||
// Use black_box when possible
|
||||
#[rustversion::since(1.66)]
|
||||
use core::hint::black_box;
|
||||
#[rustversion::before(1.66)]
|
||||
fn black_box<T>(val: T) -> T {
|
||||
val
|
||||
}
|
||||
|
||||
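/// Convert a bool to a u8, using black_box as a best-effort guard against the compiler
/// optimizing on the secret bit, and zeroizing the source bool afterwards.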
pub(crate) fn u8_from_bool(bit_ref: &mut bool) -> u8 {
|
||||
let bit_ref = black_box(bit_ref);
|
||||
|
||||
let mut bit = black_box(*bit_ref);
|
||||
let res = black_box(bit as u8);
|
||||
bit.zeroize();
|
||||
debug_assert!((res | 1) == 1);
|
||||
|
||||
bit_ref.zeroize();
|
||||
res
|
||||
}
|
||||
|
||||
macro_rules! math_op {
|
||||
(
|
||||
$Value: ident,
|
||||
$Other: ident,
|
||||
$Op: ident,
|
||||
$op_fn: ident,
|
||||
$Assign: ident,
|
||||
$assign_fn: ident,
|
||||
$function: expr
|
||||
) => {
|
||||
impl $Op<$Other> for $Value {
|
||||
type Output = $Value;
|
||||
fn $op_fn(self, other: $Other) -> Self::Output {
|
||||
Self($function(self.0, other.0))
|
||||
}
|
||||
}
|
||||
impl $Assign<$Other> for $Value {
|
||||
fn $assign_fn(&mut self, other: $Other) {
|
||||
self.0 = $function(self.0, other.0);
|
||||
}
|
||||
}
|
||||
impl<'a> $Op<&'a $Other> for $Value {
|
||||
type Output = $Value;
|
||||
fn $op_fn(self, other: &'a $Other) -> Self::Output {
|
||||
Self($function(self.0, other.0))
|
||||
}
|
||||
}
|
||||
impl<'a> $Assign<&'a $Other> for $Value {
|
||||
fn $assign_fn(&mut self, other: &'a $Other) {
|
||||
self.0 = $function(self.0, other.0);
|
||||
}
|
||||
}
|
||||
};
|
||||
}
|
||||
|
||||
macro_rules! from_wrapper {
|
||||
($wrapper: ident, $inner: ident, $uint: ident) => {
|
||||
impl From<$uint> for $wrapper {
|
||||
fn from(a: $uint) -> $wrapper {
|
||||
Self(Residue::new(&$inner::from(a)))
|
||||
}
|
||||
}
|
||||
};
|
||||
}
|
||||
|
||||
macro_rules! field {
|
||||
(
|
||||
$FieldName: ident,
|
||||
$ResidueType: ident,
|
||||
|
||||
$MODULUS_STR: ident,
|
||||
$MODULUS: ident,
|
||||
$WIDE_MODULUS: ident,
|
||||
|
||||
$NUM_BITS: literal,
|
||||
$MULTIPLICATIVE_GENERATOR: literal,
|
||||
$S: literal,
|
||||
$ROOT_OF_UNITY: literal,
|
||||
$DELTA: literal,
|
||||
) => {
|
||||
use core::{
|
||||
ops::{DerefMut, Add, AddAssign, Neg, Sub, SubAssign, Mul, MulAssign},
|
||||
iter::{Sum, Product},
|
||||
};
|
||||
|
||||
use subtle::{Choice, CtOption, ConstantTimeEq, ConstantTimeLess, ConditionallySelectable};
|
||||
use rand_core::RngCore;
|
||||
|
||||
use crypto_bigint::{Integer, NonZero, Encoding, impl_modulus};
|
||||
|
||||
use ciphersuite::group::ff::{
|
||||
Field, PrimeField, FieldBits, PrimeFieldBits, helpers::sqrt_ratio_generic,
|
||||
};
|
||||
|
||||
use $crate::backend::u8_from_bool;
|
||||
|
||||
fn reduce(x: U512) -> U256 {
|
||||
U256::from_le_slice(&x.rem(&NonZero::new($WIDE_MODULUS).unwrap()).to_le_bytes()[.. 32])
|
||||
}
|
||||
|
||||
impl ConstantTimeEq for $FieldName {
|
||||
fn ct_eq(&self, other: &Self) -> Choice {
|
||||
self.0.ct_eq(&other.0)
|
||||
}
|
||||
}
|
||||
|
||||
impl ConditionallySelectable for $FieldName {
|
||||
fn conditional_select(a: &Self, b: &Self, choice: Choice) -> Self {
|
||||
$FieldName(Residue::conditional_select(&a.0, &b.0, choice))
|
||||
}
|
||||
}
|
||||
|
||||
math_op!($FieldName, $FieldName, Add, add, AddAssign, add_assign, |x: $ResidueType, y| x
|
||||
.add(&y));
|
||||
math_op!($FieldName, $FieldName, Sub, sub, SubAssign, sub_assign, |x: $ResidueType, y| x
|
||||
.sub(&y));
|
||||
math_op!($FieldName, $FieldName, Mul, mul, MulAssign, mul_assign, |x: $ResidueType, y| x
|
||||
.mul(&y));
|
||||
|
||||
from_wrapper!($FieldName, U256, u8);
|
||||
from_wrapper!($FieldName, U256, u16);
|
||||
from_wrapper!($FieldName, U256, u32);
|
||||
from_wrapper!($FieldName, U256, u64);
|
||||
from_wrapper!($FieldName, U256, u128);
|
||||
|
||||
impl Neg for $FieldName {
|
||||
type Output = $FieldName;
|
||||
fn neg(self) -> $FieldName {
|
||||
Self(self.0.neg())
|
||||
}
|
||||
}
|
||||
|
||||
impl<'a> Neg for &'a $FieldName {
|
||||
type Output = $FieldName;
|
||||
fn neg(self) -> Self::Output {
|
||||
(*self).neg()
|
||||
}
|
||||
}
|
||||
|
||||
impl $FieldName {
|
||||
/// Perform an exponentiation.
|
||||
pub fn pow(&self, other: $FieldName) -> $FieldName {
|
||||
let mut table = [Self(Residue::ONE); 16];
|
||||
table[1] = *self;
|
||||
for i in 2 .. 16 {
|
||||
table[i] = table[i - 1] * self;
|
||||
}
|
||||
|
||||
let mut res = Self(Residue::ONE);
|
||||
let mut bits = 0;
// Iterate from the MSB down, accumulating bits into 4-bit windows; for each completed window,
// square four times and multiply by the table entry it selects (a fixed-window exponentiation)
for (i, mut bit) in other.to_le_bits().iter_mut().rev().enumerate() {
|
||||
bits <<= 1;
|
||||
let mut bit = u8_from_bool(bit.deref_mut());
|
||||
bits |= bit;
|
||||
bit.zeroize();
|
||||
|
||||
if ((i + 1) % 4) == 0 {
|
||||
if i != 3 {
|
||||
for _ in 0 .. 4 {
|
||||
res *= res;
|
||||
}
|
||||
}
|
||||
|
||||
let mut factor = table[0];
|
||||
for (j, candidate) in table[1 ..].iter().enumerate() {
|
||||
let j = j + 1;
|
||||
factor = Self::conditional_select(&factor, &candidate, usize::from(bits).ct_eq(&j));
|
||||
}
|
||||
res *= factor;
|
||||
bits = 0;
|
||||
}
|
||||
}
|
||||
res
|
||||
}
|
||||
}
|
||||
|
||||
impl Field for $FieldName {
|
||||
const ZERO: Self = Self(Residue::ZERO);
|
||||
const ONE: Self = Self(Residue::ONE);
|
||||
|
||||
fn random(mut rng: impl RngCore) -> Self {
|
||||
let mut bytes = [0; 64];
|
||||
rng.fill_bytes(&mut bytes);
|
||||
$FieldName(Residue::new(&reduce(U512::from_le_slice(bytes.as_ref()))))
|
||||
}
|
||||
|
||||
fn square(&self) -> Self {
|
||||
Self(self.0.square())
|
||||
}
|
||||
fn double(&self) -> Self {
|
||||
*self + self
|
||||
}
|
||||
|
||||
fn invert(&self) -> CtOption<Self> {
|
||||
let res = self.0.invert();
|
||||
CtOption::new(Self(res.0), res.1.into())
|
||||
}
|
||||
|
||||
fn sqrt(&self) -> CtOption<Self> {
|
||||
// (p + 1) // 4, as valid since p % 4 == 3
|
||||
let mod_plus_one_div_four = $MODULUS.saturating_add(&U256::ONE).wrapping_div(&(4u8.into()));
|
||||
let res = self.pow(Self($ResidueType::new_checked(&mod_plus_one_div_four).unwrap()));
|
||||
CtOption::new(res, res.square().ct_eq(self))
|
||||
}
|
||||
|
||||
fn sqrt_ratio(num: &Self, div: &Self) -> (Choice, Self) {
|
||||
sqrt_ratio_generic(num, div)
|
||||
}
|
||||
}
|
||||
|
||||
impl PrimeField for $FieldName {
|
||||
type Repr = [u8; 32];
|
||||
|
||||
const MODULUS: &'static str = $MODULUS_STR;
|
||||
|
||||
const NUM_BITS: u32 = $NUM_BITS;
|
||||
const CAPACITY: u32 = $NUM_BITS - 1;
|
||||
|
||||
const TWO_INV: Self = $FieldName($ResidueType::new(&U256::from_u8(2)).invert().0);
|
||||
|
||||
const MULTIPLICATIVE_GENERATOR: Self =
|
||||
Self(Residue::new(&U256::from_u8($MULTIPLICATIVE_GENERATOR)));
|
||||
const S: u32 = $S;
|
||||
|
||||
const ROOT_OF_UNITY: Self = $FieldName(Residue::new(&U256::from_be_hex($ROOT_OF_UNITY)));
|
||||
const ROOT_OF_UNITY_INV: Self = Self(Self::ROOT_OF_UNITY.0.invert().0);
|
||||
|
||||
const DELTA: Self = $FieldName(Residue::new(&U256::from_be_hex($DELTA)));
|
||||
|
||||
fn from_repr(bytes: Self::Repr) -> CtOption<Self> {
|
||||
let res = U256::from_le_slice(&bytes);
|
||||
CtOption::new($FieldName(Residue::new(&res)), res.ct_lt(&$MODULUS))
|
||||
}
|
||||
fn to_repr(&self) -> Self::Repr {
|
||||
let mut repr = [0; 32];
|
||||
repr.copy_from_slice(&self.0.retrieve().to_le_bytes());
|
||||
repr
|
||||
}
|
||||
|
||||
fn is_odd(&self) -> Choice {
|
||||
self.0.retrieve().is_odd()
|
||||
}
|
||||
}
|
||||
|
||||
impl PrimeFieldBits for $FieldName {
|
||||
type ReprBits = [u8; 32];
|
||||
|
||||
fn to_le_bits(&self) -> FieldBits<Self::ReprBits> {
|
||||
self.to_repr().into()
|
||||
}
|
||||
|
||||
fn char_le_bits() -> FieldBits<Self::ReprBits> {
|
||||
let mut repr = [0; 32];
|
||||
repr.copy_from_slice(&MODULUS.to_le_bytes());
|
||||
repr.into()
|
||||
}
|
||||
}
|
||||
|
||||
impl Sum<$FieldName> for $FieldName {
|
||||
fn sum<I: Iterator<Item = $FieldName>>(iter: I) -> $FieldName {
|
||||
let mut res = $FieldName::ZERO;
|
||||
for item in iter {
|
||||
res += item;
|
||||
}
|
||||
res
|
||||
}
|
||||
}
|
||||
|
||||
impl<'a> Sum<&'a $FieldName> for $FieldName {
|
||||
fn sum<I: Iterator<Item = &'a $FieldName>>(iter: I) -> $FieldName {
|
||||
iter.cloned().sum()
|
||||
}
|
||||
}
|
||||
|
||||
impl Product<$FieldName> for $FieldName {
|
||||
fn product<I: Iterator<Item = $FieldName>>(iter: I) -> $FieldName {
|
||||
let mut res = $FieldName::ONE;
|
||||
for item in iter {
|
||||
res *= item;
|
||||
}
|
||||
res
|
||||
}
|
||||
}
|
||||
|
||||
impl<'a> Product<&'a $FieldName> for $FieldName {
|
||||
fn product<I: Iterator<Item = &'a $FieldName>>(iter: I) -> $FieldName {
|
||||
iter.cloned().product()
|
||||
}
|
||||
}
|
||||
};
|
||||
}
|
||||
47
crypto/evrf/embedwards25519/src/lib.rs
Normal file
@@ -0,0 +1,47 @@
|
||||
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
|
||||
#![doc = include_str!("../README.md")]
|
||||
|
||||
use generic_array::typenum::{Sum, Diff, Quot, U, U1, U2};
|
||||
use ciphersuite::group::{ff::PrimeField, Group};
|
||||
|
||||
#[macro_use]
|
||||
mod backend;
|
||||
|
||||
mod scalar;
|
||||
pub use scalar::Scalar;
|
||||
|
||||
pub use dalek_ff_group::Scalar as FieldElement;
|
||||
|
||||
mod point;
|
||||
pub use point::Point;
|
||||
|
||||
/// Ciphersuite for Embedwards25519.
|
||||
///
|
||||
/// hash_to_F is implemented with a naive concatenation of the dst and data, allowing transposition
|
||||
/// between the two. This means `dst: b"abc", data: b"def"` will produce the same scalar as
/// `dst: b"abcdef", data: b""`. Please use carefully, not letting dsts be substrings of each other.
|
||||
#[derive(Clone, Copy, PartialEq, Eq, Debug, zeroize::Zeroize)]
|
||||
pub struct Embedwards25519;
|
||||
impl ciphersuite::Ciphersuite for Embedwards25519 {
|
||||
type F = Scalar;
|
||||
type G = Point;
|
||||
type H = blake2::Blake2b512;
|
||||
|
||||
const ID: &'static [u8] = b"embedwards25519";
|
||||
|
||||
fn generator() -> Self::G {
|
||||
Point::generator()
|
||||
}
|
||||
|
||||
fn hash_to_F(dst: &[u8], data: &[u8]) -> Self::F {
|
||||
use blake2::Digest;
|
||||
Scalar::wide_reduce(Self::H::digest([dst, data].concat()).as_slice().try_into().unwrap())
|
||||
}
|
||||
}
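
// A sketch demonstrating the dst/data transposition documented above (an added example, not a
// test vector from any spec).
#[test]
fn hash_to_f_transposition() {
  use ciphersuite::Ciphersuite;
  assert_eq!(
    Embedwards25519::hash_to_F(b"abc", b"def"),
    Embedwards25519::hash_to_F(b"abcdef", b""),
  );
}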
|
||||
|
||||
impl generalized_bulletproofs_ec_gadgets::DiscreteLogParameters for Embedwards25519 {
type ScalarBits = U<{ Scalar::NUM_BITS as usize }>;
// (bits + 1) / 2 x**i coefficients in a divisor
type XCoefficients = Quot<Sum<Self::ScalarBits, U1>, U2>;
// One less, as the x**1 coefficient is normalized to 1 and omitted
type XCoefficientsMinusOne = Diff<Self::XCoefficients, U1>;
// ((bits + 1 + 1) / 2) - 2 y x**i coefficients
type YxCoefficients = Diff<Quot<Sum<Sum<Self::ScalarBits, U1>, U1>, U2>, U2>;
}
|
||||
415
crypto/evrf/embedwards25519/src/point.rs
Normal file
@@ -0,0 +1,415 @@
|
||||
use core::{
|
||||
ops::{DerefMut, Add, AddAssign, Neg, Sub, SubAssign, Mul, MulAssign},
|
||||
iter::Sum,
|
||||
};
|
||||
|
||||
use rand_core::RngCore;
|
||||
|
||||
use zeroize::Zeroize;
|
||||
use subtle::{Choice, CtOption, ConstantTimeEq, ConditionallySelectable};
|
||||
|
||||
use ciphersuite::group::{
|
||||
ff::{Field, PrimeField, PrimeFieldBits},
|
||||
Group, GroupEncoding,
|
||||
prime::PrimeGroup,
|
||||
};
|
||||
|
||||
use crate::{backend::u8_from_bool, Scalar, FieldElement};
|
||||
|
||||
#[allow(non_snake_case)]
|
||||
fn B() -> FieldElement {
|
||||
FieldElement::from_repr(hex_literal::hex!(
|
||||
"5f07603a853f20370b682036210d463e64903a23ea669d07ca26cfc13f594209"
|
||||
))
|
||||
.unwrap()
|
||||
}
|
||||
|
||||
fn recover_y(x: FieldElement) -> CtOption<FieldElement> {
|
||||
// x**3 - 3 * x + B
|
||||
((x.square() * x) - (x.double() + x) + B()).sqrt()
|
||||
}
|
||||
|
||||
/// Point.
|
||||
#[derive(Clone, Copy, Debug, Zeroize)]
|
||||
#[repr(C)]
|
||||
pub struct Point {
|
||||
x: FieldElement, // / Z
|
||||
y: FieldElement, // / Z
|
||||
z: FieldElement,
|
||||
}
|
||||
|
||||
impl ConstantTimeEq for Point {
|
||||
fn ct_eq(&self, other: &Self) -> Choice {
|
||||
let x1 = self.x * other.z;
|
||||
let x2 = other.x * self.z;
|
||||
|
||||
let y1 = self.y * other.z;
|
||||
let y2 = other.y * self.z;
|
||||
|
||||
(self.x.is_zero() & other.x.is_zero()) | (x1.ct_eq(&x2) & y1.ct_eq(&y2))
|
||||
}
|
||||
}
|
||||
|
||||
impl PartialEq for Point {
|
||||
fn eq(&self, other: &Point) -> bool {
|
||||
self.ct_eq(other).into()
|
||||
}
|
||||
}
|
||||
|
||||
impl Eq for Point {}
|
||||
|
||||
impl ConditionallySelectable for Point {
|
||||
fn conditional_select(a: &Self, b: &Self, choice: Choice) -> Self {
|
||||
Point {
|
||||
x: FieldElement::conditional_select(&a.x, &b.x, choice),
|
||||
y: FieldElement::conditional_select(&a.y, &b.y, choice),
|
||||
z: FieldElement::conditional_select(&a.z, &b.z, choice),
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
impl Add for Point {
|
||||
type Output = Point;
|
||||
#[allow(non_snake_case)]
|
||||
fn add(self, other: Self) -> Self {
|
||||
// add-2015-rcb
|
||||
|
||||
let a = -FieldElement::from(3u64);
|
||||
let B = B();
|
||||
let b3 = B + B + B;
|
||||
|
||||
let X1 = self.x;
|
||||
let Y1 = self.y;
|
||||
let Z1 = self.z;
|
||||
let X2 = other.x;
|
||||
let Y2 = other.y;
|
||||
let Z2 = other.z;
|
||||
|
||||
let t0 = X1 * X2;
|
||||
let t1 = Y1 * Y2;
|
||||
let t2 = Z1 * Z2;
|
||||
let t3 = X1 + Y1;
|
||||
let t4 = X2 + Y2;
|
||||
let t3 = t3 * t4;
|
||||
let t4 = t0 + t1;
|
||||
let t3 = t3 - t4;
|
||||
let t4 = X1 + Z1;
|
||||
let t5 = X2 + Z2;
|
||||
let t4 = t4 * t5;
|
||||
let t5 = t0 + t2;
|
||||
let t4 = t4 - t5;
|
||||
let t5 = Y1 + Z1;
|
||||
let X3 = Y2 + Z2;
|
||||
let t5 = t5 * X3;
|
||||
let X3 = t1 + t2;
|
||||
let t5 = t5 - X3;
|
||||
let Z3 = a * t4;
|
||||
let X3 = b3 * t2;
|
||||
let Z3 = X3 + Z3;
|
||||
let X3 = t1 - Z3;
|
||||
let Z3 = t1 + Z3;
|
||||
let Y3 = X3 * Z3;
|
||||
let t1 = t0 + t0;
|
||||
let t1 = t1 + t0;
|
||||
let t2 = a * t2;
|
||||
let t4 = b3 * t4;
|
||||
let t1 = t1 + t2;
|
||||
let t2 = t0 - t2;
|
||||
let t2 = a * t2;
|
||||
let t4 = t4 + t2;
|
||||
let t0 = t1 * t4;
|
||||
let Y3 = Y3 + t0;
|
||||
let t0 = t5 * t4;
|
||||
let X3 = t3 * X3;
|
||||
let X3 = X3 - t0;
|
||||
let t0 = t3 * t1;
|
||||
let Z3 = t5 * Z3;
|
||||
let Z3 = Z3 + t0;
|
||||
Point { x: X3, y: Y3, z: Z3 }
|
||||
}
|
||||
}
|
||||
|
||||
impl AddAssign for Point {
|
||||
fn add_assign(&mut self, other: Point) {
|
||||
*self = *self + other;
|
||||
}
|
||||
}
|
||||
|
||||
impl Add<&Point> for Point {
|
||||
type Output = Point;
|
||||
fn add(self, other: &Point) -> Point {
|
||||
self + *other
|
||||
}
|
||||
}
|
||||
|
||||
impl AddAssign<&Point> for Point {
|
||||
fn add_assign(&mut self, other: &Point) {
|
||||
*self += *other;
|
||||
}
|
||||
}
|
||||
|
||||
impl Neg for Point {
|
||||
type Output = Point;
|
||||
fn neg(self) -> Self {
|
||||
Point { x: self.x, y: -self.y, z: self.z }
|
||||
}
|
||||
}
|
||||
|
||||
impl Sub for Point {
|
||||
type Output = Point;
|
||||
#[allow(clippy::suspicious_arithmetic_impl)]
|
||||
fn sub(self, other: Self) -> Self {
|
||||
self + other.neg()
|
||||
}
|
||||
}
|
||||
|
||||
impl SubAssign for Point {
|
||||
fn sub_assign(&mut self, other: Point) {
|
||||
*self = *self - other;
|
||||
}
|
||||
}
|
||||
|
||||
impl Sub<&Point> for Point {
|
||||
type Output = Point;
|
||||
fn sub(self, other: &Point) -> Point {
|
||||
self - *other
|
||||
}
|
||||
}
|
||||
|
||||
impl SubAssign<&Point> for Point {
|
||||
fn sub_assign(&mut self, other: &Point) {
|
||||
*self -= *other;
|
||||
}
|
||||
}
|
||||
|
||||
impl Group for Point {
|
||||
type Scalar = Scalar;
|
||||
fn random(mut rng: impl RngCore) -> Self {
|
||||
loop {
|
||||
let mut bytes = [0; 32];
|
||||
rng.fill_bytes(bytes.as_mut());
|
||||
let opt = Self::from_bytes(&bytes);
|
||||
if opt.is_some().into() {
|
||||
return opt.unwrap();
|
||||
}
|
||||
}
|
||||
}
|
||||
fn identity() -> Self {
|
||||
Point { x: FieldElement::ZERO, y: FieldElement::ONE, z: FieldElement::ZERO }
|
||||
}
|
||||
fn generator() -> Self {
|
||||
Point {
|
||||
x: FieldElement::from_repr(hex_literal::hex!(
|
||||
"0100000000000000000000000000000000000000000000000000000000000000"
|
||||
))
|
||||
.unwrap(),
|
||||
y: FieldElement::from_repr(hex_literal::hex!(
|
||||
"2e4118080a484a3dfbafe2199a0e36b7193581d676c0dadfa376b0265616020c"
|
||||
))
|
||||
.unwrap(),
|
||||
z: FieldElement::ONE,
|
||||
}
|
||||
}
|
||||
fn is_identity(&self) -> Choice {
|
||||
self.z.ct_eq(&FieldElement::ZERO)
|
||||
}
|
||||
#[allow(non_snake_case)]
|
||||
fn double(&self) -> Self {
|
||||
// dbl-2007-bl-2
|
||||
let X1 = self.x;
|
||||
let Y1 = self.y;
|
||||
let Z1 = self.z;
|
||||
|
||||
let w = (X1 - Z1) * (X1 + Z1);
|
||||
let w = w.double() + w;
|
||||
let s = (Y1 * Z1).double();
|
||||
let ss = s.square();
|
||||
let sss = s * ss;
|
||||
let R = Y1 * s;
|
||||
let RR = R.square();
|
||||
let B_ = (X1 * R).double();
|
||||
let h = w.square() - B_.double();
|
||||
let X3 = h * s;
|
||||
let Y3 = w * (B_ - h) - RR.double();
|
||||
let Z3 = sss;
|
||||
|
||||
let res = Self { x: X3, y: Y3, z: Z3 };
|
||||
// If self is identity, res will not be well-formed
|
||||
// Accordingly, we return self if self was the identity
|
||||
Self::conditional_select(&res, self, self.is_identity())
|
||||
}
|
||||
}
|
||||
|
||||
impl Sum<Point> for Point {
|
||||
fn sum<I: Iterator<Item = Point>>(iter: I) -> Point {
|
||||
let mut res = Self::identity();
|
||||
for i in iter {
|
||||
res += i;
|
||||
}
|
||||
res
|
||||
}
|
||||
}
|
||||
|
||||
impl<'a> Sum<&'a Point> for Point {
|
||||
fn sum<I: Iterator<Item = &'a Point>>(iter: I) -> Point {
|
||||
Point::sum(iter.cloned())
|
||||
}
|
||||
}
|
||||
|
||||
impl Mul<Scalar> for Point {
|
||||
type Output = Point;
|
||||
fn mul(self, mut other: Scalar) -> Point {
|
||||
// Precompute a table of small multiples of this point for a fixed 4-bit window
|
||||
let mut table = [Point::identity(); 16];
|
||||
table[1] = self;
|
||||
for i in 2 .. 16 {
|
||||
table[i] = table[i - 1] + self;
|
||||
}
|
||||
|
||||
let mut res = Self::identity();
|
||||
let mut bits = 0;
|
||||
for (i, mut bit) in other.to_le_bits().iter_mut().rev().enumerate() {
|
||||
bits <<= 1;
|
||||
let mut bit = u8_from_bool(bit.deref_mut());
|
||||
bits |= bit;
|
||||
bit.zeroize();
|
||||
|
||||
if ((i + 1) % 4) == 0 {
|
||||
if i != 3 {
|
||||
for _ in 0 .. 4 {
|
||||
res = res.double();
|
||||
}
|
||||
}
|
||||
|
||||
let mut term = table[0];
|
||||
for (j, candidate) in table[1 ..].iter().enumerate() {
|
||||
let j = j + 1;
|
||||
term = Self::conditional_select(&term, candidate, usize::from(bits).ct_eq(&j));
|
||||
}
|
||||
res += term;
|
||||
bits = 0;
|
||||
}
|
||||
}
|
||||
other.zeroize();
|
||||
res
|
||||
}
|
||||
}
|
||||
|
||||
impl MulAssign<Scalar> for Point {
|
||||
fn mul_assign(&mut self, other: Scalar) {
|
||||
*self = *self * other;
|
||||
}
|
||||
}
|
||||
|
||||
impl Mul<&Scalar> for Point {
|
||||
type Output = Point;
|
||||
fn mul(self, other: &Scalar) -> Point {
|
||||
self * *other
|
||||
}
|
||||
}
|
||||
|
||||
impl MulAssign<&Scalar> for Point {
|
||||
fn mul_assign(&mut self, other: &Scalar) {
|
||||
*self *= *other;
|
||||
}
|
||||
}
|
||||
|
||||
impl GroupEncoding for Point {
|
||||
type Repr = [u8; 32];
|
||||
|
||||
fn from_bytes(bytes: &Self::Repr) -> CtOption<Self> {
|
||||
// Extract and clear the sign bit
|
||||
let mut bytes = *bytes;
|
||||
let sign = Choice::from(bytes[31] >> 7);
|
||||
bytes[31] &= u8::MAX >> 1;
|
||||
|
||||
// Parse x, recover y
|
||||
FieldElement::from_repr(bytes).and_then(|x| {
|
||||
let is_identity = x.is_zero();
|
||||
|
||||
let y = recover_y(x).map(|mut y| {
|
||||
y = <_>::conditional_select(&y, &-y, y.is_odd().ct_eq(&!sign));
|
||||
y
|
||||
});
|
||||
|
||||
// If this is the identity, set y to 1
|
||||
let y =
|
||||
CtOption::conditional_select(&y, &CtOption::new(FieldElement::ONE, 1.into()), is_identity);
|
||||
// Create the point if we have a y solution
|
||||
let point = y.map(|y| Point { x, y, z: FieldElement::ONE });
|
||||
|
||||
let not_negative_zero = !(is_identity & sign);
|
||||
// Only return the point if it isn't -0
|
||||
CtOption::conditional_select(
|
||||
&CtOption::new(Point::identity(), 0.into()),
|
||||
&point,
|
||||
not_negative_zero,
|
||||
)
|
||||
})
|
||||
}
|
||||
|
||||
fn from_bytes_unchecked(bytes: &Self::Repr) -> CtOption<Self> {
|
||||
Point::from_bytes(bytes)
|
||||
}
|
||||
|
||||
fn to_bytes(&self) -> Self::Repr {
|
||||
let Some(z) = Option::<FieldElement>::from(self.z.invert()) else {
|
||||
return [0; 32];
|
||||
};
|
||||
let x = self.x * z;
|
||||
let y = self.y * z;
|
||||
|
||||
let mut res = [0; 32];
|
||||
res.as_mut().copy_from_slice(&x.to_repr());
|
||||
|
||||
// The following conditional select normalizes the sign to 0 when x is 0
|
||||
let y_sign = u8::conditional_select(&y.is_odd().unwrap_u8(), &0, x.ct_eq(&FieldElement::ZERO));
|
||||
res[31] |= y_sign << 7;
|
||||
res
|
||||
}
|
||||
}
|
||||
|
||||
impl PrimeGroup for Point {}
|
||||
|
||||
impl ec_divisors::DivisorCurve for Point {
|
||||
type FieldElement = FieldElement;
|
||||
|
||||
fn a() -> Self::FieldElement {
|
||||
-FieldElement::from(3u64)
|
||||
}
|
||||
fn b() -> Self::FieldElement {
|
||||
B()
|
||||
}
|
||||
|
||||
fn to_xy(point: Self) -> Option<(Self::FieldElement, Self::FieldElement)> {
|
||||
let z: Self::FieldElement = Option::from(point.z.invert())?;
|
||||
Some((point.x * z, point.y * z))
|
||||
}
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_curve() {
|
||||
ff_group_tests::group::test_prime_group_bits::<_, Point>(&mut rand_core::OsRng);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn generator() {
|
||||
assert_eq!(
|
||||
Point::generator(),
|
||||
Point::from_bytes(&hex_literal::hex!(
|
||||
"0100000000000000000000000000000000000000000000000000000000000000"
|
||||
))
|
||||
.unwrap()
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn zero_x_is_invalid() {
|
||||
assert!(Option::<FieldElement>::from(recover_y(FieldElement::ZERO)).is_none());
|
||||
}
|
||||
|
||||
// Checks random won't infinitely loop
|
||||
#[test]
|
||||
fn random() {
|
||||
Point::random(&mut rand_core::OsRng);
|
||||
}
|
||||
52
crypto/evrf/embedwards25519/src/scalar.rs
Normal file
@@ -0,0 +1,52 @@
|
||||
use zeroize::{DefaultIsZeroes, Zeroize};
|
||||
|
||||
use crypto_bigint::{
|
||||
U256, U512,
|
||||
modular::constant_mod::{ResidueParams, Residue},
|
||||
};
|
||||
|
||||
const MODULUS_STR: &str = "0fffffffffffffffffffffffffffffffe53f4debb78ff96877063f0306eef96b";
|
||||
|
||||
impl_modulus!(EmbedwardsQ, U256, MODULUS_STR);
|
||||
type ResidueType = Residue<EmbedwardsQ, { EmbedwardsQ::LIMBS }>;
|
||||
|
||||
/// The Scalar field of Embedwards25519.
|
||||
///
|
||||
/// This is the order of the prime-order group of points on embedwards25519.
|
||||
#[derive(Clone, Copy, PartialEq, Eq, Default, Debug)]
|
||||
#[repr(C)]
|
||||
pub struct Scalar(pub(crate) ResidueType);
|
||||
|
||||
impl DefaultIsZeroes for Scalar {}
|
||||
|
||||
pub(crate) const MODULUS: U256 = U256::from_be_hex(MODULUS_STR);
|
||||
|
||||
const WIDE_MODULUS: U512 = U512::from_be_hex(concat!(
|
||||
"0000000000000000000000000000000000000000000000000000000000000000",
|
||||
"0fffffffffffffffffffffffffffffffe53f4debb78ff96877063f0306eef96b",
|
||||
));
|
||||
|
||||
field!(
Scalar,
ResidueType,
MODULUS_STR,
MODULUS,
WIDE_MODULUS,
// NUM_BITS
252,
// MULTIPLICATIVE_GENERATOR
10,
// S
1,
// ROOT_OF_UNITY
"0fffffffffffffffffffffffffffffffe53f4debb78ff96877063f0306eef96a",
// DELTA
"0000000000000000000000000000000000000000000000000000000000000064",
);
|
||||
|
||||
impl Scalar {
|
||||
/// Perform a wide reduction, presumably to obtain a non-biased Scalar field element.
|
||||
pub fn wide_reduce(bytes: [u8; 64]) -> Scalar {
|
||||
Scalar(Residue::new(&reduce(U512::from_le_slice(bytes.as_ref()))))
|
||||
}
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_scalar_field() {
|
||||
ff_group_tests::prime_field::test_prime_field_bits::<_, Scalar>(&mut rand_core::OsRng);
|
||||
}
|
||||
33
crypto/evrf/generalized-bulletproofs/Cargo.toml
Normal file
@@ -0,0 +1,33 @@
|
||||
[package]
|
||||
name = "generalized-bulletproofs"
|
||||
version = "0.1.0"
|
||||
description = "Generalized Bulletproofs"
|
||||
license = "MIT"
|
||||
repository = "https://github.com/serai-dex/serai/tree/develop/crypto/evrf/generalized-bulletproofs"
|
||||
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
|
||||
keywords = ["ciphersuite", "ff", "group"]
|
||||
edition = "2021"
|
||||
|
||||
[package.metadata.docs.rs]
|
||||
all-features = true
|
||||
rustdoc-args = ["--cfg", "docsrs"]
|
||||
|
||||
[dependencies]
|
||||
rand_core = { version = "0.6", default-features = false, features = ["std"] }
|
||||
|
||||
zeroize = { version = "^1.5", default-features = false, features = ["std", "zeroize_derive"] }
|
||||
|
||||
blake2 = { version = "0.10", default-features = false, features = ["std"] }
|
||||
|
||||
multiexp = { path = "../../multiexp", version = "0.4", default-features = false, features = ["std", "batch"] }
|
||||
ciphersuite = { path = "../../ciphersuite", version = "0.4", default-features = false, features = ["std"] }
|
||||
|
||||
[dev-dependencies]
|
||||
rand_core = { version = "0.6", features = ["getrandom"] }
|
||||
|
||||
transcript = { package = "flexible-transcript", path = "../../transcript", features = ["recommended"] }
|
||||
|
||||
ciphersuite = { path = "../../ciphersuite", features = ["ristretto"] }
|
||||
|
||||
[features]
|
||||
tests = []
|
||||
21
crypto/evrf/generalized-bulletproofs/LICENSE
Normal file
@@ -0,0 +1,21 @@
|
||||
MIT License
|
||||
|
||||
Copyright (c) 2021-2024 Luke Parker
|
||||
|
||||
Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||
of this software and associated documentation files (the "Software"), to deal
|
||||
in the Software without restriction, including without limitation the rights
|
||||
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||
copies of the Software, and to permit persons to whom the Software is
|
||||
furnished to do so, subject to the following conditions:
|
||||
|
||||
The above copyright notice and this permission notice shall be included in all
|
||||
copies or substantial portions of the Software.
|
||||
|
||||
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
||||
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
|
||||
SOFTWARE.
|
||||
6
crypto/evrf/generalized-bulletproofs/README.md
Normal file
@@ -0,0 +1,6 @@
# Generalized Bulletproofs

An implementation of
[Generalized Bulletproofs](https://repo.getmonero.org/monero-project/ccs-proposals/uploads/a9baa50c38c6312efc0fea5c6a188bb9/gbp.pdf),
a variant of the Bulletproofs arithmetic circuit statement to support Pedersen
vector commitments.
679
crypto/evrf/generalized-bulletproofs/src/arithmetic_circuit_proof.rs
Normal file
@@ -0,0 +1,679 @@
|
||||
use rand_core::{RngCore, CryptoRng};
|
||||
|
||||
use zeroize::{Zeroize, ZeroizeOnDrop};
|
||||
|
||||
use multiexp::{multiexp, multiexp_vartime};
|
||||
use ciphersuite::{group::ff::Field, Ciphersuite};
|
||||
|
||||
use crate::{
|
||||
ScalarVector, PointVector, ProofGenerators, PedersenCommitment, PedersenVectorCommitment,
|
||||
BatchVerifier,
|
||||
transcript::*,
|
||||
lincomb::accumulate_vector,
|
||||
inner_product::{IpError, IpStatement, IpWitness, P},
|
||||
};
|
||||
pub use crate::lincomb::{Variable, LinComb};
|
||||
|
||||
/// An Arithmetic Circuit Statement.
|
||||
///
|
||||
/// Bulletproofs' constraints are of the form
|
||||
/// `aL * aR = aO, WL * aL + WR * aR + WO * aO = WV * V + c`.
|
||||
///
|
||||
/// Generalized Bulletproofs modifies this to
|
||||
/// `aL * aR = aO, WL * aL + WR * aR + WO * aO + WCG * C_G + WCH * C_H = WV * V + c`.
|
||||
///
|
||||
/// We implement the latter, yet represented (for simplicity) as
|
||||
/// `aL * aR = aO, WL * aL + WR * aR + WO * aO + WCG * C_G + WCH * C_H + WV * V + c = 0`.
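///
/// For example, constraining `aL[0] + aR[0]` to equal `V[0]` uses `WL = [(0, 1)]`,
/// `WR = [(0, 1)]`, `WV = [(0, -1)]`, and `c = 0`.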
|
||||
#[derive(Clone, Debug)]
|
||||
pub struct ArithmeticCircuitStatement<'a, C: Ciphersuite> {
|
||||
generators: ProofGenerators<'a, C>,
|
||||
|
||||
constraints: Vec<LinComb<C::F>>,
|
||||
C: PointVector<C>,
|
||||
V: PointVector<C>,
|
||||
}
|
||||
|
||||
impl<'a, C: Ciphersuite> Zeroize for ArithmeticCircuitStatement<'a, C> {
|
||||
fn zeroize(&mut self) {
|
||||
self.constraints.zeroize();
|
||||
self.C.zeroize();
|
||||
self.V.zeroize();
|
||||
}
|
||||
}
|
||||
|
||||
/// The witness for an arithmetic circuit statement.
|
||||
#[derive(Clone, Debug, Zeroize, ZeroizeOnDrop)]
|
||||
pub struct ArithmeticCircuitWitness<C: Ciphersuite> {
|
||||
aL: ScalarVector<C::F>,
|
||||
aR: ScalarVector<C::F>,
|
||||
aO: ScalarVector<C::F>,
|
||||
|
||||
c: Vec<PedersenVectorCommitment<C>>,
|
||||
v: Vec<PedersenCommitment<C>>,
|
||||
}
|
||||
|
||||
/// An error incurred during arithmetic circuit proof operations.
|
||||
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
|
||||
pub enum AcError {
|
||||
/// The vectors of scalars which are multiplied against each other were of different lengths.
|
||||
DifferingLrLengths,
|
||||
/// The matrices of constraints are of different lengths.
|
||||
InconsistentAmountOfConstraints,
|
||||
/// A constraint referred to a non-existent term.
|
||||
ConstrainedNonExistentTerm,
|
||||
/// A constraint referred to a non-existent commitment.
|
||||
ConstrainedNonExistentCommitment,
|
||||
/// There weren't enough generators to prove for this statement.
|
||||
NotEnoughGenerators,
|
||||
/// The witness was inconsistent to the statement.
|
||||
///
|
||||
/// Sanity checks on the witness are always performed. If the library is compiled with debug
|
||||
/// assertions on, the satisfaction of all constraints and validity of the commitments is
|
||||
/// additionally checked.
|
||||
InconsistentWitness,
|
||||
/// There was an error from the inner-product proof.
|
||||
Ip(IpError),
|
||||
/// The proof wasn't complete and the necessary values could not be read from the transcript.
|
||||
IncompleteProof,
|
||||
}
|
||||
|
||||
impl<C: Ciphersuite> ArithmeticCircuitWitness<C> {
|
||||
/// Constructs a new witness instance.
|
||||
pub fn new(
|
||||
aL: ScalarVector<C::F>,
|
||||
aR: ScalarVector<C::F>,
|
||||
c: Vec<PedersenVectorCommitment<C>>,
|
||||
v: Vec<PedersenCommitment<C>>,
|
||||
) -> Result<Self, AcError> {
|
||||
if aL.len() != aR.len() {
|
||||
Err(AcError::DifferingLrLengths)?;
|
||||
}
|
||||
|
||||
// The Pedersen Vector Commitments don't have their variables' lengths checked as they aren't
|
||||
// paired off with each other as aL, aR are
|
||||
|
||||
// The PVC commit function ensures there's enough generators for their amount of terms
|
||||
// If there aren't enough/the same generators when this is proven for, it'll trigger
|
||||
// InconsistentWitness
|
||||
|
||||
let aO = aL.clone() * &aR;
|
||||
Ok(ArithmeticCircuitWitness { aL, aR, aO, c, v })
|
||||
}
|
||||
}
|
||||
|
||||
struct YzChallenges<C: Ciphersuite> {
|
||||
y_inv: ScalarVector<C::F>,
|
||||
z: ScalarVector<C::F>,
|
||||
}
|
||||
|
||||
impl<'a, C: Ciphersuite> ArithmeticCircuitStatement<'a, C> {
|
||||
// The amount of multiplications performed.
|
||||
fn n(&self) -> usize {
|
||||
self.generators.len()
|
||||
}
|
||||
|
||||
// The amount of constraints.
|
||||
fn q(&self) -> usize {
|
||||
self.constraints.len()
|
||||
}
|
||||
|
||||
// The amount of Pedersen vector commitments.
|
||||
fn c(&self) -> usize {
|
||||
self.C.len()
|
||||
}
|
||||
|
||||
// The amount of Pedersen commitments.
|
||||
fn m(&self) -> usize {
|
||||
self.V.len()
|
||||
}
|
||||
|
||||
/// Create a new ArithmeticCircuitStatement for the specified relationship.
|
||||
///
|
||||
/// The `LinComb`s passed as `constraints` will be bound to evaluate to 0.
|
||||
///
|
||||
/// The constraints are not transcripted. They're expected to be deterministic from the context
|
||||
/// and higher-level statement. If your constraints are variable, you MUST transcript them before
|
||||
/// calling prove/verify.
|
||||
///
|
||||
/// The commitments are expected to have been transcripted externally to this statement's
|
||||
/// invocation. That's practically ensured by taking a `Commitments` struct here, which is only
|
||||
/// obtainable via a transcript.
|
||||
pub fn new(
|
||||
generators: ProofGenerators<'a, C>,
|
||||
constraints: Vec<LinComb<C::F>>,
|
||||
commitments: Commitments<C>,
|
||||
) -> Result<Self, AcError> {
|
||||
let Commitments { C, V } = commitments;
|
||||
|
||||
for constraint in &constraints {
|
||||
if Some(generators.len()) <= constraint.highest_a_index {
|
||||
Err(AcError::ConstrainedNonExistentTerm)?;
|
||||
}
|
||||
if Some(C.len()) <= constraint.highest_c_index {
|
||||
Err(AcError::ConstrainedNonExistentCommitment)?;
|
||||
}
|
||||
if Some(V.len()) <= constraint.highest_v_index {
|
||||
Err(AcError::ConstrainedNonExistentCommitment)?;
|
||||
}
|
||||
}
|
||||
|
||||
Ok(Self { generators, constraints, C, V })
|
||||
}
|
||||
|
||||
fn yz_challenges(&self, y: C::F, z_1: C::F) -> YzChallenges<C> {
|
||||
let y_inv = y.invert().unwrap();
|
||||
let y_inv = ScalarVector::powers(y_inv, self.n());
|
||||
|
||||
// Powers of z *starting with z**1*
|
||||
// We could reuse powers and remove the first element, yet this is cheaper than the shift that
|
||||
// would require
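// E.g. for q = 3 this yields [z_1, z_1**2, z_1**3]; there is no z**0 term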
|
||||
let q = self.q();
|
||||
let mut z = ScalarVector(Vec::with_capacity(q));
|
||||
z.0.push(z_1);
|
||||
for _ in 1 .. q {
|
||||
z.0.push(*z.0.last().unwrap() * z_1);
|
||||
}
|
||||
z.0.truncate(q);
|
||||
|
||||
YzChallenges { y_inv, z }
|
||||
}
|
||||
|
||||
/// Prove for this statement/witness.
|
||||
pub fn prove<R: RngCore + CryptoRng>(
|
||||
self,
|
||||
rng: &mut R,
|
||||
transcript: &mut Transcript,
|
||||
mut witness: ArithmeticCircuitWitness<C>,
|
||||
) -> Result<(), AcError> {
|
||||
let n = self.n();
|
||||
let c = self.c();
|
||||
let m = self.m();
|
||||
|
||||
// Check the witness length and pad it to the necessary power of two
|
||||
if witness.aL.len() > n {
|
||||
Err(AcError::NotEnoughGenerators)?;
|
||||
}
|
||||
while witness.aL.len() < n {
|
||||
witness.aL.0.push(C::F::ZERO);
|
||||
witness.aR.0.push(C::F::ZERO);
|
||||
witness.aO.0.push(C::F::ZERO);
|
||||
}
|
||||
for c in &mut witness.c {
|
||||
if c.g_values.len() > n {
|
||||
Err(AcError::NotEnoughGenerators)?;
|
||||
}
|
||||
if c.h_values.len() > n {
|
||||
Err(AcError::NotEnoughGenerators)?;
|
||||
}
|
||||
// The Pedersen vector commitments internally have n terms
|
||||
while c.g_values.len() < n {
|
||||
c.g_values.0.push(C::F::ZERO);
|
||||
}
|
||||
while c.h_values.len() < n {
|
||||
c.h_values.0.push(C::F::ZERO);
|
||||
}
|
||||
}
|
||||
|
||||
// Check the witness's consistency with the statement
|
||||
if (c != witness.c.len()) || (m != witness.v.len()) {
|
||||
Err(AcError::InconsistentWitness)?;
|
||||
}
|
||||
|
||||
#[cfg(debug_assertions)]
|
||||
{
|
||||
for (commitment, opening) in self.V.0.iter().zip(witness.v.iter()) {
|
||||
if *commitment != opening.commit(self.generators.g(), self.generators.h()) {
|
||||
Err(AcError::InconsistentWitness)?;
|
||||
}
|
||||
}
|
||||
for (commitment, opening) in self.C.0.iter().zip(witness.c.iter()) {
|
||||
if Some(*commitment) !=
|
||||
opening.commit(
|
||||
self.generators.g_bold_slice(),
|
||||
self.generators.h_bold_slice(),
|
||||
self.generators.h(),
|
||||
)
|
||||
{
|
||||
Err(AcError::InconsistentWitness)?;
|
||||
}
|
||||
}
|
||||
for constraint in &self.constraints {
|
||||
let eval =
|
||||
constraint
|
||||
.WL
|
||||
.iter()
|
||||
.map(|(i, weight)| *weight * witness.aL[*i])
|
||||
.chain(constraint.WR.iter().map(|(i, weight)| *weight * witness.aR[*i]))
|
||||
.chain(constraint.WO.iter().map(|(i, weight)| *weight * witness.aO[*i]))
|
||||
.chain(
|
||||
constraint.WCG.iter().zip(&witness.c).flat_map(|(weights, c)| {
|
||||
weights.iter().map(|(j, weight)| *weight * c.g_values[*j])
|
||||
}),
|
||||
)
|
||||
.chain(
|
||||
constraint.WCH.iter().zip(&witness.c).flat_map(|(weights, c)| {
|
||||
weights.iter().map(|(j, weight)| *weight * c.h_values[*j])
|
||||
}),
|
||||
)
|
||||
.chain(constraint.WV.iter().map(|(i, weight)| *weight * witness.v[*i].value))
|
||||
.chain(core::iter::once(constraint.c))
|
||||
.sum::<C::F>();
|
||||
|
||||
if eval != C::F::ZERO {
|
||||
Err(AcError::InconsistentWitness)?;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
let alpha = C::F::random(&mut *rng);
|
||||
let beta = C::F::random(&mut *rng);
|
||||
let rho = C::F::random(&mut *rng);
|
||||
|
||||
let AI = {
|
||||
let alg = witness.aL.0.iter().enumerate().map(|(i, aL)| (*aL, self.generators.g_bold(i)));
|
||||
let arh = witness.aR.0.iter().enumerate().map(|(i, aR)| (*aR, self.generators.h_bold(i)));
|
||||
let ah = core::iter::once((alpha, self.generators.h()));
|
||||
let mut AI_terms = alg.chain(arh).chain(ah).collect::<Vec<_>>();
|
||||
let AI = multiexp(&AI_terms);
|
||||
AI_terms.zeroize();
|
||||
AI
|
||||
};
|
||||
let AO = {
|
||||
let aog = witness.aO.0.iter().enumerate().map(|(i, aO)| (*aO, self.generators.g_bold(i)));
|
||||
let bh = core::iter::once((beta, self.generators.h()));
|
||||
let mut AO_terms = aog.chain(bh).collect::<Vec<_>>();
|
||||
let AO = multiexp(&AO_terms);
|
||||
AO_terms.zeroize();
|
||||
AO
|
||||
};
|
||||
|
||||
let mut sL = ScalarVector(Vec::with_capacity(n));
|
||||
let mut sR = ScalarVector(Vec::with_capacity(n));
|
||||
for _ in 0 .. n {
|
||||
sL.0.push(C::F::random(&mut *rng));
|
||||
sR.0.push(C::F::random(&mut *rng));
|
||||
}
|
||||
let S = {
|
||||
let slg = sL.0.iter().enumerate().map(|(i, sL)| (*sL, self.generators.g_bold(i)));
|
||||
let srh = sR.0.iter().enumerate().map(|(i, sR)| (*sR, self.generators.h_bold(i)));
|
||||
let rh = core::iter::once((rho, self.generators.h()));
|
||||
let mut S_terms = slg.chain(srh).chain(rh).collect::<Vec<_>>();
|
||||
let S = multiexp(&S_terms);
|
||||
S_terms.zeroize();
|
||||
S
|
||||
};
|
||||
|
||||
transcript.push_point(AI);
|
||||
transcript.push_point(AO);
|
||||
transcript.push_point(S);
|
||||
let y = transcript.challenge();
|
||||
let z = transcript.challenge();
|
||||
let YzChallenges { y_inv, z } = self.yz_challenges(y, z);
|
||||
let y = ScalarVector::powers(y, n);
|
||||
|
||||
// t is a n'-term polynomial
|
||||
// While Bulletproofs discuss it as a 6-term polynomial, Generalized Bulletproofs re-defines it
|
||||
// as `2(n' + 1)`-term, where `n'` is `2 (c + 1)`.
|
||||
// When `c = 0`, `n' = 2`, and t has `6` terms (which lines up with Bulletproofs having a 6-term
|
||||
// polynomial).
|
||||
|
||||
// ni = n'
|
||||
let ni = 2 * (c + 1);
|
||||
// These indexes are from the Generalized Bulletproofs paper
|
||||
#[rustfmt::skip]
|
||||
let ilr = ni / 2; // 1 if c = 0
|
||||
#[rustfmt::skip]
|
||||
let io = ni; // 2 if c = 0
|
||||
#[rustfmt::skip]
|
||||
let is = ni + 1; // 3 if c = 0
|
||||
#[rustfmt::skip]
|
||||
let jlr = ni / 2; // 1 if c = 0
|
||||
#[rustfmt::skip]
|
||||
let jo = 0; // 0 if c = 0
|
||||
#[rustfmt::skip]
|
||||
let js = ni + 1; // 3 if c = 0
|
||||
|
||||
// If c = 0, these indexes perfectly align with the stated powers of X from the Bulletproofs
|
||||
// paper for the following coefficients
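// E.g. with c = 1 (a single Pedersen vector commitment), n' = 4, so ilr = jlr = 2, io = 4,
// jo = 0, and is = js = 5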
|
||||
|
||||
// Declare the l and r polynomials, assigning the traditional coefficients to their positions
|
||||
let mut l = vec![];
|
||||
let mut r = vec![];
|
||||
for _ in 0 .. (is + 1) {
|
||||
l.push(ScalarVector::new(0));
|
||||
r.push(ScalarVector::new(0));
|
||||
}
|
||||
|
||||
let mut l_weights = ScalarVector::new(n);
|
||||
let mut r_weights = ScalarVector::new(n);
|
||||
let mut o_weights = ScalarVector::new(n);
|
||||
for (constraint, z) in self.constraints.iter().zip(&z.0) {
|
||||
accumulate_vector(&mut l_weights, &constraint.WL, *z);
|
||||
accumulate_vector(&mut r_weights, &constraint.WR, *z);
|
||||
accumulate_vector(&mut o_weights, &constraint.WO, *z);
|
||||
}
|
||||
|
||||
l[ilr] = (r_weights * &y_inv) + &witness.aL;
|
||||
l[io] = witness.aO.clone();
|
||||
l[is] = sL;
|
||||
r[jlr] = l_weights + &(witness.aR.clone() * &y);
|
||||
r[jo] = o_weights - &y;
|
||||
r[js] = sR * &y;
|
||||
|
||||
// Pad as expected
|
||||
for l in &mut l {
|
||||
debug_assert!((l.len() == 0) || (l.len() == n));
|
||||
if l.len() == 0 {
|
||||
*l = ScalarVector::new(n);
|
||||
}
|
||||
}
|
||||
for r in &mut r {
|
||||
debug_assert!((r.len() == 0) || (r.len() == n));
|
||||
if r.len() == 0 {
|
||||
*r = ScalarVector::new(n);
|
||||
}
|
||||
}
|
||||
|
||||
// We now fill in the vector commitments
|
||||
// We use unused coefficients of l increasing from 0 (skipping ilr), and unused coefficients of
|
||||
// r decreasing from n' (skipping jlr)
|
||||
|
||||
let mut cg_weights = Vec::with_capacity(witness.c.len());
|
||||
let mut ch_weights = Vec::with_capacity(witness.c.len());
|
||||
for i in 0 .. witness.c.len() {
|
||||
let mut cg = ScalarVector::new(n);
|
||||
let mut ch = ScalarVector::new(n);
|
||||
for (constraint, z) in self.constraints.iter().zip(&z.0) {
|
||||
if let Some(WCG) = constraint.WCG.get(i) {
|
||||
accumulate_vector(&mut cg, WCG, *z);
|
||||
}
|
||||
if let Some(WCH) = constraint.WCH.get(i) {
|
||||
accumulate_vector(&mut ch, WCH, *z);
|
||||
}
|
||||
}
|
||||
cg_weights.push(cg);
|
||||
ch_weights.push(ch);
|
||||
}
|
||||
|
||||
for (i, (c, (cg_weights, ch_weights))) in
|
||||
witness.c.iter().zip(cg_weights.into_iter().zip(ch_weights)).enumerate()
|
||||
{
|
||||
let i = i + 1;
|
||||
let j = ni - i;
|
||||
|
||||
l[i] = c.g_values.clone();
|
||||
l[j] = ch_weights * &y_inv;
|
||||
r[j] = cg_weights;
|
||||
r[i] = (c.h_values.clone() * &y) + &r[i];
|
||||
}
|
||||
|
||||
// Multiply them to obtain t
|
||||
let mut t = ScalarVector::new(1 + (2 * (l.len() - 1)));
|
||||
for (i, l) in l.iter().enumerate() {
|
||||
for (j, r) in r.iter().enumerate() {
|
||||
let new_coeff = i + j;
|
||||
t[new_coeff] += l.inner_product(r.0.iter());
|
||||
}
|
||||
}
|
||||
|
||||
// Per Bulletproofs, calculate masks tau for each t where (i > 0) && (i != 2)
|
||||
// Per Generalized Bulletproofs, calculate masks tau for each t where i != n'
|
||||
// With Bulletproofs, t[0] is zero, hence its omission, yet Generalized Bulletproofs uses it
|
||||
let mut tau_before_ni = vec![];
|
||||
for _ in 0 .. ni {
|
||||
tau_before_ni.push(C::F::random(&mut *rng));
|
||||
}
|
||||
let mut tau_after_ni = vec![];
|
||||
for _ in 0 .. t.0[(ni + 1) ..].len() {
|
||||
tau_after_ni.push(C::F::random(&mut *rng));
|
||||
}
|
||||
// Calculate commitments to the coefficients of t, blinded by tau
|
||||
debug_assert_eq!(t.0[0 .. ni].len(), tau_before_ni.len());
|
||||
for (t, tau) in t.0[0 .. ni].iter().zip(tau_before_ni.iter()) {
|
||||
transcript.push_point(multiexp(&[(*t, self.generators.g()), (*tau, self.generators.h())]));
|
||||
}
|
||||
debug_assert_eq!(t.0[(ni + 1) ..].len(), tau_after_ni.len());
|
||||
for (t, tau) in t.0[(ni + 1) ..].iter().zip(tau_after_ni.iter()) {
|
||||
transcript.push_point(multiexp(&[(*t, self.generators.g()), (*tau, self.generators.h())]));
|
||||
}
|
||||
|
||||
let x: ScalarVector<C::F> = ScalarVector::powers(transcript.challenge(), t.len());
|
||||
|
||||
let poly_eval = |poly: &[ScalarVector<C::F>], x: &ScalarVector<_>| -> ScalarVector<_> {
|
||||
let mut res = ScalarVector::<C::F>::new(poly[0].0.len());
|
||||
for (i, coeff) in poly.iter().enumerate() {
|
||||
res = res + &(coeff.clone() * x[i]);
|
||||
}
|
||||
res
|
||||
};
|
||||
let l = poly_eval(&l, &x);
|
||||
let r = poly_eval(&r, &x);
|
||||
|
||||
let t_caret = l.inner_product(r.0.iter());
|
||||
|
||||
let mut V_weights = ScalarVector::new(self.V.len());
|
||||
for (constraint, z) in self.constraints.iter().zip(&z.0) {
|
||||
// We use `-z`, not `z`, as we write our constraint as `... + WV V = 0` not `= WV V + ..`
|
||||
// This means we need to subtract `WV V` from both sides, which we accomplish here
|
||||
accumulate_vector(&mut V_weights, &constraint.WV, -*z);
|
||||
}
|
||||
|
||||
let tau_x = {
|
||||
let mut tau_x_poly = vec![];
|
||||
tau_x_poly.extend(tau_before_ni);
|
||||
tau_x_poly.push(V_weights.inner_product(witness.v.iter().map(|v| &v.mask)));
|
||||
tau_x_poly.extend(tau_after_ni);
|
||||
|
||||
let mut tau_x = C::F::ZERO;
|
||||
for (i, coeff) in tau_x_poly.into_iter().enumerate() {
|
||||
tau_x += coeff * x[i];
|
||||
}
|
||||
tau_x
|
||||
};
|
||||
|
||||
// Calculate u for the powers of x variable to ilr/io/is
|
||||
let u = {
|
||||
// Calculate the first part of u
|
||||
let mut u = (alpha * x[ilr]) + (beta * x[io]) + (rho * x[is]);
|
||||
|
||||
// Incorporate the commitment masks multiplied by the associated power of x
|
||||
for (i, commitment) in witness.c.iter().enumerate() {
|
||||
let i = i + 1;
|
||||
u += x[i] * commitment.mask;
|
||||
}
|
||||
u
|
||||
};
|
||||
|
||||
// Use the Inner-Product argument to prove for this
|
||||
// P = t_caret * g + l * g_bold + r * (y_inv * h_bold)
|
||||
|
||||
let mut P_terms = Vec::with_capacity(1 + (2 * self.generators.len()));
|
||||
debug_assert_eq!(l.len(), r.len());
|
||||
for (i, (l, r)) in l.0.iter().zip(r.0.iter()).enumerate() {
|
||||
P_terms.push((*l, self.generators.g_bold(i)));
|
||||
P_terms.push((y_inv[i] * r, self.generators.h_bold(i)));
|
||||
}
|
||||
|
||||
// Protocol 1, inlined, since our IpStatement is for Protocol 2
|
||||
transcript.push_scalar(tau_x);
|
||||
transcript.push_scalar(u);
|
||||
transcript.push_scalar(t_caret);
|
||||
let ip_x = transcript.challenge();
|
||||
P_terms.push((ip_x * t_caret, self.generators.g()));
|
||||
IpStatement::new(
|
||||
self.generators,
|
||||
y_inv,
|
||||
ip_x,
|
||||
// Safe since IpStatement isn't a ZK proof
|
||||
P::Prover(multiexp_vartime(&P_terms)),
|
||||
)
|
||||
.unwrap()
|
||||
.prove(transcript, IpWitness::new(l, r).unwrap())
|
||||
.map_err(AcError::Ip)
|
||||
}
|
||||
|
||||
/// Verify a proof for this statement.
|
||||
pub fn verify<R: RngCore + CryptoRng>(
|
||||
self,
|
||||
rng: &mut R,
|
||||
verifier: &mut BatchVerifier<C>,
|
||||
transcript: &mut VerifierTranscript,
|
||||
) -> Result<(), AcError> {
|
||||
let n = self.n();
|
||||
let c = self.c();
|
||||
|
||||
let ni = 2 * (c + 1);
|
||||
|
||||
let ilr = ni / 2;
|
||||
let io = ni;
|
||||
let is = ni + 1;
|
||||
let jlr = ni / 2;
|
||||
|
||||
let l_r_poly_len = 1 + ni + 1;
|
||||
let t_poly_len = (2 * l_r_poly_len) - 1;
|
||||
|
||||
let AI = transcript.read_point::<C>().map_err(|_| AcError::IncompleteProof)?;
|
||||
let AO = transcript.read_point::<C>().map_err(|_| AcError::IncompleteProof)?;
|
||||
let S = transcript.read_point::<C>().map_err(|_| AcError::IncompleteProof)?;
|
||||
let y = transcript.challenge();
|
||||
let z = transcript.challenge();
|
||||
let YzChallenges { y_inv, z } = self.yz_challenges(y, z);
|
||||
|
||||
let mut l_weights = ScalarVector::new(n);
|
||||
let mut r_weights = ScalarVector::new(n);
|
||||
let mut o_weights = ScalarVector::new(n);
|
||||
for (constraint, z) in self.constraints.iter().zip(&z.0) {
|
||||
accumulate_vector(&mut l_weights, &constraint.WL, *z);
|
||||
accumulate_vector(&mut r_weights, &constraint.WR, *z);
|
||||
accumulate_vector(&mut o_weights, &constraint.WO, *z);
|
||||
}
|
||||
let r_weights = r_weights * &y_inv;
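// delta is the inner product of the y_inv-scaled, z-weighted WR terms with the z-weighted WL
// terms, per the Bulletproofs verification equation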
|
||||
|
||||
let delta = r_weights.inner_product(l_weights.0.iter());
|
||||
|
||||
let mut T_before_ni = Vec::with_capacity(ni);
|
||||
let mut T_after_ni = Vec::with_capacity(t_poly_len - ni - 1);
|
||||
for _ in 0 .. ni {
|
||||
T_before_ni.push(transcript.read_point::<C>().map_err(|_| AcError::IncompleteProof)?);
|
||||
}
|
||||
for _ in 0 .. (t_poly_len - ni - 1) {
|
||||
T_after_ni.push(transcript.read_point::<C>().map_err(|_| AcError::IncompleteProof)?);
|
||||
}
|
||||
let x: ScalarVector<C::F> = ScalarVector::powers(transcript.challenge(), t_poly_len);
|
||||
|
||||
let tau_x = transcript.read_scalar::<C>().map_err(|_| AcError::IncompleteProof)?;
|
||||
let u = transcript.read_scalar::<C>().map_err(|_| AcError::IncompleteProof)?;
|
||||
let t_caret = transcript.read_scalar::<C>().map_err(|_| AcError::IncompleteProof)?;
|
||||
|
||||
// Lines 88-90, modified per Generalized Bulletproofs as needed w.r.t. t
|
||||
{
|
||||
let verifier_weight = C::F::random(&mut *rng);
|
||||
// lhs of the equation, weighted to enable batch verification
|
||||
verifier.g += t_caret * verifier_weight;
|
||||
verifier.h += tau_x * verifier_weight;
|
||||
|
||||
let mut V_weights = ScalarVector::new(self.V.len());
|
||||
for (constraint, z) in self.constraints.iter().zip(&z.0) {
|
||||
// We use `-z`, not `z`, as we write our constraint as `... + WV V = 0` not `= WV V + ..`
|
||||
// This means we need to subtract `WV V` from both sides, which we accomplish here
|
||||
accumulate_vector(&mut V_weights, &constraint.WV, -*z);
|
||||
}
|
||||
V_weights = V_weights * x[ni];
|
||||
|
||||
// rhs of the equation, negated to cause a sum to zero
|
||||
// `delta - z...`, instead of `delta + z...`, is done for the same reason as in the above WV
|
||||
// matrix transform
|
||||
verifier.g -= verifier_weight *
|
||||
x[ni] *
|
||||
(delta - z.inner_product(self.constraints.iter().map(|constraint| &constraint.c)));
|
||||
for pair in V_weights.0.into_iter().zip(self.V.0) {
|
||||
verifier.additional.push((-verifier_weight * pair.0, pair.1));
|
||||
}
|
||||
for (i, T) in T_before_ni.into_iter().enumerate() {
|
||||
verifier.additional.push((-verifier_weight * x[i], T));
|
||||
}
|
||||
for (i, T) in T_after_ni.into_iter().enumerate() {
|
||||
verifier.additional.push((-verifier_weight * x[ni + 1 + i], T));
|
||||
}
|
||||
}
|
||||
|
||||
let verifier_weight = C::F::random(&mut *rng);
|
||||
// Multiply `x` by `verifier_weight` as this applies `verifier_weight` to most scalars and
|
||||
// saves a notable amount of operations
|
||||
let x = x * verifier_weight;
|
||||
|
||||
// This following block effectively calculates P, within the multiexp
|
||||
{
|
||||
verifier.additional.push((x[ilr], AI));
|
||||
verifier.additional.push((x[io], AO));
|
||||
// h' ** y is equivalent to h as h' is h ** y_inv
|
||||
let mut log2_n = 0;
|
||||
while (1 << log2_n) != n {
|
||||
log2_n += 1;
|
||||
}
|
||||
verifier.h_sum[log2_n] -= verifier_weight;
|
||||
verifier.additional.push((x[is], S));
|
||||
|
||||
// Lines 85-87 calculate WL, WR, WO
|
||||
// We preserve them in terms of g_bold and h_bold for a more efficient multiexp
|
||||
let mut h_bold_scalars = l_weights * x[jlr];
|
||||
for (i, wr) in (r_weights * x[jlr]).0.into_iter().enumerate() {
|
||||
verifier.g_bold[i] += wr;
|
||||
}
|
||||
// WO is weighted by x**jo where jo == 0, hence why we can ignore the x term
|
||||
h_bold_scalars = h_bold_scalars + &(o_weights * verifier_weight);
|
||||
|
||||
let mut cg_weights = Vec::with_capacity(self.C.len());
|
||||
let mut ch_weights = Vec::with_capacity(self.C.len());
|
||||
for i in 0 .. self.C.len() {
|
||||
let mut cg = ScalarVector::new(n);
|
||||
let mut ch = ScalarVector::new(n);
|
||||
for (constraint, z) in self.constraints.iter().zip(&z.0) {
|
||||
if let Some(WCG) = constraint.WCG.get(i) {
|
||||
accumulate_vector(&mut cg, WCG, *z);
|
||||
}
|
||||
if let Some(WCH) = constraint.WCH.get(i) {
|
||||
accumulate_vector(&mut ch, WCH, *z);
|
||||
}
|
||||
}
|
||||
cg_weights.push(cg);
|
||||
ch_weights.push(ch);
|
||||
}
|
||||
|
||||
// Push the terms for C, which increment from 0, and the terms for WC, which decrement from
|
||||
// n'
|
||||
for (i, (C, (WCG, WCH))) in
|
||||
self.C.0.into_iter().zip(cg_weights.into_iter().zip(ch_weights)).enumerate()
|
||||
{
|
||||
let i = i + 1;
|
||||
let j = ni - i;
|
||||
verifier.additional.push((x[i], C));
|
||||
h_bold_scalars = h_bold_scalars + &(WCG * x[j]);
|
||||
for (i, scalar) in (WCH * &y_inv * x[j]).0.into_iter().enumerate() {
|
||||
verifier.g_bold[i] += scalar;
|
||||
}
|
||||
}
|
||||
|
||||
// All terms for h_bold here have actually been for h_bold', h_bold * y_inv
|
||||
h_bold_scalars = h_bold_scalars * &y_inv;
|
||||
for (i, scalar) in h_bold_scalars.0.into_iter().enumerate() {
|
||||
verifier.h_bold[i] += scalar;
|
||||
}
|
||||
|
||||
// Remove u * h from P
|
||||
verifier.h -= verifier_weight * u;
|
||||
}
|
||||
|
||||
// Prove for lines 88, 92 with an Inner-Product statement
|
||||
// This inlines Protocol 1, as our IpStatement implements Protocol 2
|
||||
let ip_x = transcript.challenge();
|
||||
// P is amended with this additional term
|
||||
verifier.g += verifier_weight * ip_x * t_caret;
|
||||
IpStatement::new(self.generators, y_inv, ip_x, P::Verifier { verifier_weight })
|
||||
.unwrap()
|
||||
.verify(verifier, transcript)
|
||||
.map_err(AcError::Ip)?;
|
||||
|
||||
Ok(())
|
||||
}
|
||||
}
|
||||
360
crypto/evrf/generalized-bulletproofs/src/inner_product.rs
Normal file
@@ -0,0 +1,360 @@
|
||||
use multiexp::multiexp_vartime;
|
||||
use ciphersuite::{group::ff::Field, Ciphersuite};
|
||||
|
||||
#[rustfmt::skip]
|
||||
use crate::{ScalarVector, PointVector, ProofGenerators, BatchVerifier, transcript::*, padded_pow_of_2};
|
||||
|
||||
/// An error from proving/verifying Inner-Product statements.
|
||||
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
|
||||
pub enum IpError {
|
||||
/// An incorrect amount of generators was provided.
|
||||
IncorrectAmountOfGenerators,
|
||||
/// The witness was inconsistent to the statement.
|
||||
///
|
||||
/// Sanity checks on the witness are always performed. If the library is compiled with debug
|
||||
/// assertions on, whether or not this witness actually opens `P` is checked.
|
||||
InconsistentWitness,
|
||||
/// The proof wasn't complete and the necessary values could not be read from the transcript.
|
||||
IncompleteProof,
|
||||
}
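// How P is specified: the prover works with the point itself, while the verifier solely tracks
// the random weight with which P's terms are accumulated into the batch verifier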
|
||||
|
||||
#[derive(Clone, PartialEq, Eq, Debug)]
|
||||
pub(crate) enum P<C: Ciphersuite> {
|
||||
Verifier { verifier_weight: C::F },
|
||||
Prover(C::G),
|
||||
}
|
||||
|
||||
/// The Bulletproofs Inner-Product statement.
|
||||
///
|
||||
/// This is for usage with Protocol 2 from the Bulletproofs paper.
|
||||
#[derive(Clone, Debug)]
|
||||
pub(crate) struct IpStatement<'a, C: Ciphersuite> {
|
||||
generators: ProofGenerators<'a, C>,
|
||||
// Weights for h_bold
|
||||
h_bold_weights: ScalarVector<C::F>,
|
||||
// u as the discrete logarithm of G
|
||||
u: C::F,
|
||||
// P
|
||||
P: P<C>,
|
||||
}
|
||||
|
||||
/// The witness for the Bulletproofs Inner-Product statement.
|
||||
#[derive(Clone, Debug)]
|
||||
pub(crate) struct IpWitness<C: Ciphersuite> {
|
||||
// a
|
||||
a: ScalarVector<C::F>,
|
||||
// b
|
||||
b: ScalarVector<C::F>,
|
||||
}
|
||||
|
||||
impl<C: Ciphersuite> IpWitness<C> {
|
||||
/// Construct a new witness for an Inner-Product statement.
|
||||
///
|
||||
/// If the witness is less than a power of two, it is padded to the nearest power of two.
|
||||
///
|
||||
/// This function returns None if the lengths of a and b are mismatched or either is empty.
|
||||
pub(crate) fn new(mut a: ScalarVector<C::F>, mut b: ScalarVector<C::F>) -> Option<Self> {
|
||||
if a.0.is_empty() || (a.len() != b.len()) {
|
||||
None?;
|
||||
}
|
||||
|
||||
// Pad to the nearest power of 2
|
||||
let missing = padded_pow_of_2(a.len()) - a.len();
|
||||
a.0.reserve(missing);
|
||||
b.0.reserve(missing);
|
||||
for _ in 0 .. missing {
|
||||
a.0.push(C::F::ZERO);
|
||||
b.0.push(C::F::ZERO);
|
||||
}
|
||||
|
||||
Some(Self { a, b })
|
||||
}
|
||||
}
|
||||
|
||||
impl<'a, C: Ciphersuite> IpStatement<'a, C> {
|
||||
/// Create a new Inner-Product statement.
|
||||
///
|
||||
/// This does not perform any transcripting of any variables within this statement. They must be
|
||||
/// deterministic to the existing transcript.
|
||||
pub(crate) fn new(
|
||||
generators: ProofGenerators<'a, C>,
|
||||
h_bold_weights: ScalarVector<C::F>,
|
||||
u: C::F,
|
||||
P: P<C>,
|
||||
) -> Result<Self, IpError> {
|
||||
if generators.h_bold_slice().len() != h_bold_weights.len() {
|
||||
Err(IpError::IncorrectAmountOfGenerators)?
|
||||
}
|
||||
Ok(Self { generators, h_bold_weights, u, P })
|
||||
}
|
||||
|
||||
/// Prove for this Inner-Product statement.
|
||||
///
|
||||
/// Returns an error if this statement couldn't be proven for (such as if the witness isn't
|
||||
/// consistent).
|
||||
pub(crate) fn prove(
|
||||
self,
|
||||
transcript: &mut Transcript,
|
||||
witness: IpWitness<C>,
|
||||
) -> Result<(), IpError> {
|
||||
let (mut g_bold, mut h_bold, u, mut P, mut a, mut b) = {
|
||||
let IpStatement { generators, h_bold_weights, u, P } = self;
|
||||
let u = generators.g() * u;
|
||||
|
||||
// Ensure we have the exact amount of generators
|
||||
if generators.g_bold_slice().len() != witness.a.len() {
|
||||
Err(IpError::IncorrectAmountOfGenerators)?;
|
||||
}
|
||||
// Acquire a local copy of the generators
|
||||
let g_bold = PointVector::<C>(generators.g_bold_slice().to_vec());
|
||||
let h_bold = PointVector::<C>(generators.h_bold_slice().to_vec()).mul_vec(&h_bold_weights);
|
||||
|
||||
let IpWitness { a, b } = witness;
|
||||
|
||||
let P = match P {
|
||||
P::Prover(point) => point,
|
||||
P::Verifier { .. } => {
|
||||
panic!("prove called with a P specification which was for the verifier")
|
||||
}
|
||||
};
|
||||
|
||||
// Ensure this witness actually opens this statement
|
||||
#[cfg(debug_assertions)]
|
||||
{
|
||||
let ag = a.0.iter().cloned().zip(g_bold.0.iter().cloned());
|
||||
let bh = b.0.iter().cloned().zip(h_bold.0.iter().cloned());
|
||||
let cu = core::iter::once((a.inner_product(b.0.iter()), u));
|
||||
if P != multiexp_vartime(&ag.chain(bh).chain(cu).collect::<Vec<_>>()) {
|
||||
Err(IpError::InconsistentWitness)?;
|
||||
}
|
||||
}
|
||||
|
||||
(g_bold, h_bold, u, P, a, b)
|
||||
};
|
||||
|
||||
// `else: (n > 1)` case, lines 18-35 of the Bulletproofs paper
|
||||
// This interprets `g_bold.len()` as `n`
|
||||
while g_bold.len() > 1 {
|
||||
// Split a, b, g_bold, h_bold as needed for lines 20-24
|
||||
let (a1, a2) = a.clone().split();
|
||||
let (b1, b2) = b.clone().split();
|
||||
|
||||
let (g_bold1, g_bold2) = g_bold.split();
|
||||
let (h_bold1, h_bold2) = h_bold.split();
|
||||
|
||||
let n_hat = g_bold1.len();
|
||||
|
||||
// Sanity
|
||||
debug_assert_eq!(a1.len(), n_hat);
|
||||
debug_assert_eq!(a2.len(), n_hat);
|
||||
debug_assert_eq!(b1.len(), n_hat);
|
||||
debug_assert_eq!(b2.len(), n_hat);
|
||||
debug_assert_eq!(g_bold1.len(), n_hat);
|
||||
debug_assert_eq!(g_bold2.len(), n_hat);
|
||||
debug_assert_eq!(h_bold1.len(), n_hat);
|
||||
debug_assert_eq!(h_bold2.len(), n_hat);
|
||||
|
||||
// cl, cr, lines 21-22
|
||||
let cl = a1.inner_product(b2.0.iter());
|
||||
let cr = a2.inner_product(b1.0.iter());
|
||||
|
||||
let L = {
|
||||
let mut L_terms = Vec::with_capacity(1 + (2 * g_bold1.len()));
|
||||
for (a, g) in a1.0.iter().zip(g_bold2.0.iter()) {
|
||||
L_terms.push((*a, *g));
|
||||
}
|
||||
for (b, h) in b2.0.iter().zip(h_bold1.0.iter()) {
|
||||
L_terms.push((*b, *h));
|
||||
}
|
||||
L_terms.push((cl, u));
|
||||
// Uses vartime since this isn't a ZK proof
|
||||
multiexp_vartime(&L_terms)
|
||||
};
|
||||
|
||||
let R = {
|
||||
let mut R_terms = Vec::with_capacity(1 + (2 * g_bold1.len()));
|
||||
for (a, g) in a2.0.iter().zip(g_bold1.0.iter()) {
|
||||
R_terms.push((*a, *g));
|
||||
}
|
||||
for (b, h) in b1.0.iter().zip(h_bold2.0.iter()) {
|
||||
R_terms.push((*b, *h));
|
||||
}
|
||||
R_terms.push((cr, u));
|
||||
multiexp_vartime(&R_terms)
|
||||
};
|
||||
|
||||
// Now that we've calculated L, R, transcript them to receive x (26-27)
|
||||
transcript.push_point(L);
|
||||
transcript.push_point(R);
|
||||
let x: C::F = transcript.challenge();
|
||||
let x_inv = x.invert().unwrap();
|
||||
|
||||
// The prover and verifier now calculate the following (28-31)
|
||||
g_bold = PointVector(Vec::with_capacity(g_bold1.len()));
|
||||
for (a, b) in g_bold1.0.into_iter().zip(g_bold2.0.into_iter()) {
|
||||
g_bold.0.push(multiexp_vartime(&[(x_inv, a), (x, b)]));
|
||||
}
|
||||
h_bold = PointVector(Vec::with_capacity(h_bold1.len()));
|
||||
for (a, b) in h_bold1.0.into_iter().zip(h_bold2.0.into_iter()) {
|
||||
h_bold.0.push(multiexp_vartime(&[(x, a), (x_inv, b)]));
|
||||
}
|
||||
P = (L * (x * x)) + P + (R * (x_inv * x_inv));
|
||||
|
||||
// 32-34
|
||||
a = (a1 * x) + &(a2 * x_inv);
|
||||
b = (b1 * x_inv) + &(b2 * x);
|
||||
}
|
||||
|
||||
// `if n = 1` case from line 14-17
|
||||
|
||||
// Sanity
|
||||
debug_assert_eq!(g_bold.len(), 1);
|
||||
debug_assert_eq!(h_bold.len(), 1);
|
||||
debug_assert_eq!(a.len(), 1);
|
||||
debug_assert_eq!(b.len(), 1);
|
||||
|
||||
// We simply send a/b
|
||||
transcript.push_scalar(a[0]);
|
||||
transcript.push_scalar(b[0]);
|
||||
Ok(())
|
||||
}
|
||||
|
||||
/*
|
||||
This has room for optimization worth investigating further. It currently takes
|
||||
an iterative approach. It can be optimized further via divide and conquer.
|
||||
|
||||
Assume there are 4 challenges.
|
||||
|
||||
Iterative approach (current):
|
||||
1. Do the optimal multiplications across challenge column 0 and 1.
|
||||
2. Do the optimal multiplications across that result and column 2.
|
||||
3. Do the optimal multiplications across that result and column 3.
|
||||
|
||||
Divide and conquer (worth investigating further):
|
||||
1. Do the optimal multiplications across challenge column 0 and 1.
|
||||
2. Do the optimal multiplications across challenge column 2 and 3.
|
||||
3. Multiply both results together.
|
||||
|
||||
When there are 4 challenges (n=16), the iterative approach does 28 multiplications
|
||||
versus divide and conquer's 24.
|
||||
*/
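// Returns a vector of length 2**challenges.len() where entry i is the product, across all
// challenges, of x (when the corresponding bit of i is set) or x_inv (when clear), with the
// first challenge mapped to the most significant bit. These are the per-generator scalars
// applied to g_bold (and, reversed, to h_bold) in the final verification equation.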
|
||||
fn challenge_products(challenges: &[(C::F, C::F)]) -> Vec<C::F> {
|
||||
let mut products = vec![C::F::ONE; 1 << challenges.len()];
|
||||
|
||||
if !challenges.is_empty() {
|
||||
products[0] = challenges[0].1;
|
||||
products[1] = challenges[0].0;
|
||||
|
||||
for (j, challenge) in challenges.iter().enumerate().skip(1) {
|
||||
let mut slots = (1 << (j + 1)) - 1;
|
||||
while slots > 0 {
|
||||
products[slots] = products[slots / 2] * challenge.0;
|
||||
products[slots - 1] = products[slots / 2] * challenge.1;
|
||||
|
||||
slots = slots.saturating_sub(2);
|
||||
}
|
||||
}
|
||||
|
||||
// Sanity check since if the above failed to populate, it'd be critical
|
||||
for product in &products {
|
||||
debug_assert!(!bool::from(product.is_zero()));
|
||||
}
|
||||
}
|
||||
|
||||
products
|
||||
}
|
||||
|
||||
/// Queue an Inner-Product proof for batch verification.
|
||||
///
|
||||
/// This will return Err if there is an error. This will return Ok if the proof was successfully
|
||||
/// queued for batch verification. The caller is required to verify the batch in order to ensure
|
||||
/// the proof is actually correct.
|
||||
pub(crate) fn verify(
|
||||
self,
|
||||
verifier: &mut BatchVerifier<C>,
|
||||
transcript: &mut VerifierTranscript,
|
||||
) -> Result<(), IpError> {
|
||||
let IpStatement { generators, h_bold_weights, u, P } = self;
|
||||
|
||||
// Calculate the base-2 logarithm (rounded up) of the amount of generators present
|
||||
let mut lr_len = 0;
|
||||
while (1 << lr_len) < generators.g_bold_slice().len() {
|
||||
lr_len += 1;
|
||||
}
|
||||
|
||||
let weight = match P {
|
||||
P::Prover(_) => panic!("prove called with a P specification which was for the prover"),
|
||||
P::Verifier { verifier_weight } => verifier_weight,
|
||||
};
|
||||
|
||||
// Again, we start with the `else: (n > 1)` case
|
||||
|
||||
// We need x, x_inv per lines 25-27 for lines 28-31
|
||||
let mut L = Vec::with_capacity(lr_len);
|
||||
let mut R = Vec::with_capacity(lr_len);
|
||||
let mut xs: Vec<C::F> = Vec::with_capacity(lr_len);
|
||||
for _ in 0 .. lr_len {
|
||||
L.push(transcript.read_point::<C>().map_err(|_| IpError::IncompleteProof)?);
|
||||
R.push(transcript.read_point::<C>().map_err(|_| IpError::IncompleteProof)?);
|
||||
xs.push(transcript.challenge());
|
||||
}
|
||||
|
||||
// We calculate their inverse in batch
|
||||
let mut x_invs = xs.clone();
|
||||
{
|
||||
let mut scratch = vec![C::F::ZERO; x_invs.len()];
|
||||
ciphersuite::group::ff::BatchInverter::invert_with_external_scratch(
|
||||
&mut x_invs,
|
||||
&mut scratch,
|
||||
);
|
||||
}
|
||||
|
||||
// Now, with x and x_inv, we need to calculate g_bold', h_bold', P'
|
||||
//
|
||||
// For the sake of performance, we solely want to calculate all of these in terms of scalings
|
||||
// for g_bold, h_bold, P, and don't want to actually perform intermediary scalings of the
|
||||
// points
|
||||
//
|
||||
// L and R are easy, as it's simply x**2, x**-2
|
||||
//
|
||||
// For the series of g_bold, h_bold, we use the `challenge_products` function
|
||||
// For how that works, please see its own documentation
|
||||
let product_cache = {
|
||||
let mut challenges = Vec::with_capacity(lr_len);
|
||||
|
||||
let x_iter = xs.into_iter().zip(x_invs);
|
||||
let lr_iter = L.into_iter().zip(R);
|
||||
for ((x, x_inv), (L, R)) in x_iter.zip(lr_iter) {
|
||||
challenges.push((x, x_inv));
|
||||
verifier.additional.push((weight * x.square(), L));
|
||||
verifier.additional.push((weight * x_inv.square(), R));
|
||||
}
|
||||
|
||||
Self::challenge_products(&challenges)
|
||||
};
|
||||
|
||||
// And now for the `if n = 1` case
|
||||
let a = transcript.read_scalar::<C>().map_err(|_| IpError::IncompleteProof)?;
|
||||
let b = transcript.read_scalar::<C>().map_err(|_| IpError::IncompleteProof)?;
|
||||
let c = a * b;
|
||||
|
||||
// The multiexp of these terms equate to the final permutation of P
|
||||
// We now add terms for a * g_bold' + b * h_bold' + c * u, with the scalars negative such
|
||||
// that the terms sum to 0 for an honest prover
|
||||
|
||||
// The g_bold * a term case from line 16
|
||||
#[allow(clippy::needless_range_loop)]
|
||||
for i in 0 .. generators.g_bold_slice().len() {
|
||||
verifier.g_bold[i] -= weight * product_cache[i] * a;
|
||||
}
|
||||
// The h_bold * b term case from line 16
|
||||
for i in 0 .. generators.h_bold_slice().len() {
|
||||
verifier.h_bold[i] -=
|
||||
weight * product_cache[product_cache.len() - 1 - i] * b * h_bold_weights[i];
|
||||
}
|
||||
// The c * u term case from line 16
|
||||
verifier.g -= weight * c * u;
|
||||
|
||||
Ok(())
|
||||
}
|
||||
}
|
||||
328
crypto/evrf/generalized-bulletproofs/src/lib.rs
Normal file
@@ -0,0 +1,328 @@
|
||||
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
|
||||
#![doc = include_str!("../README.md")]
|
||||
#![deny(missing_docs)]
|
||||
#![allow(non_snake_case)]
|
||||
|
||||
use core::fmt;
|
||||
use std::collections::HashSet;
|
||||
|
||||
use zeroize::Zeroize;
|
||||
|
||||
use multiexp::{multiexp, multiexp_vartime};
|
||||
use ciphersuite::{
|
||||
group::{ff::Field, Group, GroupEncoding},
|
||||
Ciphersuite,
|
||||
};
|
||||
|
||||
mod scalar_vector;
|
||||
pub use scalar_vector::ScalarVector;
|
||||
mod point_vector;
|
||||
pub use point_vector::PointVector;
|
||||
|
||||
/// The transcript formats.
|
||||
pub mod transcript;
|
||||
|
||||
pub(crate) mod inner_product;
|
||||
|
||||
pub(crate) mod lincomb;
|
||||
|
||||
/// The arithmetic circuit proof.
|
||||
pub mod arithmetic_circuit_proof;
|
||||
|
||||
/// Functionality useful when testing.
|
||||
#[cfg(any(test, feature = "tests"))]
|
||||
pub mod tests;
|
||||
|
||||
/// Calculate the nearest power of two greater than or equal to the argument.
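///
/// e.g. `padded_pow_of_2(5) == 8` while `padded_pow_of_2(8) == 8`.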
|
||||
pub(crate) fn padded_pow_of_2(i: usize) -> usize {
|
||||
let mut next_pow_of_2 = 1;
|
||||
while next_pow_of_2 < i {
|
||||
next_pow_of_2 <<= 1;
|
||||
}
|
||||
next_pow_of_2
|
||||
}
|
||||
|
||||
/// An error from working with generators.
|
||||
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
|
||||
pub enum GeneratorsError {
|
||||
/// The provided list of generators for `g` (bold) was empty.
|
||||
GBoldEmpty,
|
||||
/// The provided list of generators for `h` (bold) did not match `g` (bold) in length.
|
||||
DifferingGhBoldLengths,
|
||||
/// The amount of provided generators was not a power of two.
|
||||
NotPowerOfTwo,
|
||||
/// A generator was used multiple times.
|
||||
DuplicatedGenerator,
|
||||
}
|
||||
|
||||
/// A full set of generators.
|
||||
#[derive(Clone)]
|
||||
pub struct Generators<C: Ciphersuite> {
|
||||
g: C::G,
|
||||
h: C::G,
|
||||
|
||||
g_bold: Vec<C::G>,
|
||||
h_bold: Vec<C::G>,
|
||||
h_sum: Vec<C::G>,
|
||||
}
|
||||
|
||||
/// A batch verifier of proofs.
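///
/// Scalar weights are accumulated per fixed generator (`g`, `h`, `g_bold`, `h_bold`, the cached
/// `h_sum`s), alongside arbitrary additional (scalar, point) pairs. Verification evaluates the
/// resulting multiexp and checks it sums to the identity.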
|
||||
#[must_use]
|
||||
#[derive(Clone)]
|
||||
pub struct BatchVerifier<C: Ciphersuite> {
|
||||
g: C::F,
|
||||
h: C::F,
|
||||
|
||||
g_bold: Vec<C::F>,
|
||||
h_bold: Vec<C::F>,
|
||||
h_sum: Vec<C::F>,
|
||||
|
||||
additional: Vec<(C::F, C::G)>,
|
||||
}
|
||||
|
||||
impl<C: Ciphersuite> fmt::Debug for Generators<C> {
|
||||
fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result {
|
||||
let g = self.g.to_bytes();
|
||||
let g: &[u8] = g.as_ref();
|
||||
|
||||
let h = self.h.to_bytes();
|
||||
let h: &[u8] = h.as_ref();
|
||||
|
||||
fmt.debug_struct("Generators").field("g", &g).field("h", &h).finish_non_exhaustive()
|
||||
}
|
||||
}
|
||||
|
||||
/// The generators for a specific proof.
|
||||
///
|
||||
/// These may have been reduced in size from the original set of generators, as beneficial
|
||||
/// to performance.
|
||||
#[derive(Copy, Clone)]
|
||||
pub struct ProofGenerators<'a, C: Ciphersuite> {
|
||||
g: &'a C::G,
|
||||
h: &'a C::G,
|
||||
|
||||
g_bold: &'a [C::G],
|
||||
h_bold: &'a [C::G],
|
||||
}
|
||||
|
||||
impl<C: Ciphersuite> fmt::Debug for ProofGenerators<'_, C> {
|
||||
fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result {
|
||||
let g = self.g.to_bytes();
|
||||
let g: &[u8] = g.as_ref();
|
||||
|
||||
let h = self.h.to_bytes();
|
||||
let h: &[u8] = h.as_ref();
|
||||
|
||||
fmt.debug_struct("ProofGenerators").field("g", &g).field("h", &h).finish_non_exhaustive()
|
||||
}
|
||||
}
|
||||
|
||||
impl<C: Ciphersuite> Generators<C> {
|
||||
/// Construct an instance of Generators for usage with Bulletproofs.
|
||||
pub fn new(
|
||||
g: C::G,
|
||||
h: C::G,
|
||||
g_bold: Vec<C::G>,
|
||||
h_bold: Vec<C::G>,
|
||||
) -> Result<Self, GeneratorsError> {
|
||||
if g_bold.is_empty() {
|
||||
Err(GeneratorsError::GBoldEmpty)?;
|
||||
}
|
||||
if g_bold.len() != h_bold.len() {
|
||||
Err(GeneratorsError::DifferingGhBoldLengths)?;
|
||||
}
|
||||
if padded_pow_of_2(g_bold.len()) != g_bold.len() {
|
||||
Err(GeneratorsError::NotPowerOfTwo)?;
|
||||
}
|
||||
|
||||
let mut set = HashSet::new();
|
||||
let mut add_generator = |generator: &C::G| {
|
||||
assert!(!bool::from(generator.is_identity()));
|
||||
let bytes = generator.to_bytes();
|
||||
!set.insert(bytes.as_ref().to_vec())
|
||||
};
|
||||
|
||||
assert!(!add_generator(&g), "g was already present in the empty set");
|
||||
if add_generator(&h) {
|
||||
Err(GeneratorsError::DuplicatedGenerator)?;
|
||||
}
|
||||
for g in &g_bold {
|
||||
if add_generator(g) {
|
||||
Err(GeneratorsError::DuplicatedGenerator)?;
|
||||
}
|
||||
}
|
||||
for h in &h_bold {
|
||||
if add_generator(h) {
|
||||
Err(GeneratorsError::DuplicatedGenerator)?;
|
||||
}
|
||||
}
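// h_sum[k] caches the sum of the first 2**k h_bold generators, letting a batch verifier apply
// a uniform scalar across that prefix with a single term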
|
||||
|
||||
let mut running_h_sum = C::G::identity();
|
||||
let mut h_sum = vec![];
|
||||
let mut next_pow_of_2 = 1;
|
||||
for (i, h) in h_bold.iter().enumerate() {
|
||||
running_h_sum += h;
|
||||
if (i + 1) == next_pow_of_2 {
|
||||
h_sum.push(running_h_sum);
|
||||
next_pow_of_2 *= 2;
|
||||
}
|
||||
}
|
||||
|
||||
Ok(Generators { g, h, g_bold, h_bold, h_sum })
|
||||
}
|
||||
|
||||
/// Create a BatchVerifier for proofs which use these generators.
|
||||
pub fn batch_verifier(&self) -> BatchVerifier<C> {
|
||||
BatchVerifier {
|
||||
g: C::F::ZERO,
|
||||
h: C::F::ZERO,
|
||||
|
||||
g_bold: vec![C::F::ZERO; self.g_bold.len()],
|
||||
h_bold: vec![C::F::ZERO; self.h_bold.len()],
|
||||
h_sum: vec![C::F::ZERO; self.h_sum.len()],
|
||||
|
||||
additional: Vec::with_capacity(128),
|
||||
}
|
||||
}
|
||||
|
||||
/// Verify all proofs queued for batch verification in this BatchVerifier.
|
||||
#[must_use]
|
||||
pub fn verify(&self, verifier: BatchVerifier<C>) -> bool {
|
||||
multiexp_vartime(
|
||||
&[(verifier.g, self.g), (verifier.h, self.h)]
|
||||
.into_iter()
|
||||
.chain(verifier.g_bold.into_iter().zip(self.g_bold.iter().cloned()))
|
||||
.chain(verifier.h_bold.into_iter().zip(self.h_bold.iter().cloned()))
|
||||
.chain(verifier.h_sum.into_iter().zip(self.h_sum.iter().cloned()))
|
||||
.chain(verifier.additional)
|
||||
.collect::<Vec<_>>(),
|
||||
)
|
||||
.is_identity()
|
||||
.into()
|
||||
}
|
||||
|
||||
/// The `g` generator.
|
||||
pub fn g(&self) -> C::G {
|
||||
self.g
|
||||
}
|
||||
|
||||
/// The `h` generator.
|
||||
pub fn h(&self) -> C::G {
|
||||
self.h
|
||||
}
|
||||
|
||||
/// A slice to view the `g` (bold) generators.
|
||||
pub fn g_bold_slice(&self) -> &[C::G] {
|
||||
&self.g_bold
|
||||
}
|
||||
|
||||
/// A slice to view the `h` (bold) generators.
|
||||
pub fn h_bold_slice(&self) -> &[C::G] {
|
||||
&self.h_bold
|
||||
}
|
||||
|
||||
/// Reduce a set of generators to the quantity necessary to support a certain amount of
|
||||
/// in-circuit multiplications/terms in a Pedersen vector commitment.
|
||||
///
|
||||
/// Returns None if reducing to 0 or if the generators reduced are insufficient to provide this
|
||||
/// many generators.
|
||||
pub fn reduce(&self, generators: usize) -> Option<ProofGenerators<'_, C>> {
|
||||
if generators == 0 {
|
||||
None?;
|
||||
}
|
||||
|
||||
// Round to the nearest power of 2
|
||||
let generators = padded_pow_of_2(generators);
|
||||
if generators > self.g_bold.len() {
|
||||
None?;
|
||||
}
|
||||
|
||||
Some(ProofGenerators {
|
||||
g: &self.g,
|
||||
h: &self.h,
|
||||
|
||||
g_bold: &self.g_bold[.. generators],
|
||||
h_bold: &self.h_bold[.. generators],
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
impl<'a, C: Ciphersuite> ProofGenerators<'a, C> {
|
||||
pub(crate) fn len(&self) -> usize {
|
||||
self.g_bold.len()
|
||||
}
|
||||
|
||||
pub(crate) fn g(&self) -> C::G {
|
||||
*self.g
|
||||
}
|
||||
|
||||
pub(crate) fn h(&self) -> C::G {
|
||||
*self.h
|
||||
}
|
||||
|
||||
pub(crate) fn g_bold(&self, i: usize) -> C::G {
|
||||
self.g_bold[i]
|
||||
}
|
||||
|
||||
pub(crate) fn h_bold(&self, i: usize) -> C::G {
|
||||
self.h_bold[i]
|
||||
}
|
||||
|
||||
pub(crate) fn g_bold_slice(&self) -> &[C::G] {
|
||||
self.g_bold
|
||||
}
|
||||
|
||||
pub(crate) fn h_bold_slice(&self) -> &[C::G] {
|
||||
self.h_bold
|
||||
}
|
||||
}
|
||||
|
||||
/// The opening of a Pedersen commitment.
|
||||
#[derive(Clone, Copy, PartialEq, Eq, Debug, Zeroize)]
|
||||
pub struct PedersenCommitment<C: Ciphersuite> {
|
||||
/// The value committed to.
|
||||
pub value: C::F,
|
||||
/// The mask blinding the value committed to.
|
||||
pub mask: C::F,
|
||||
}
|
||||
|
||||
impl<C: Ciphersuite> PedersenCommitment<C> {
|
||||
/// Commit to this value, yielding the Pedersen commitment.
|
||||
pub fn commit(&self, g: C::G, h: C::G) -> C::G {
|
||||
multiexp(&[(self.value, g), (self.mask, h)])
|
||||
}
|
||||
}
|
||||
|
||||
/// The opening of a Pedersen vector commitment.
|
||||
#[derive(Clone, PartialEq, Eq, Debug, Zeroize)]
|
||||
pub struct PedersenVectorCommitment<C: Ciphersuite> {
|
||||
/// The values committed to across the `g` (bold) generators.
|
||||
pub g_values: ScalarVector<C::F>,
|
||||
/// The values committed to across the `h` (bold) generators.
|
||||
pub h_values: ScalarVector<C::F>,
|
||||
/// The mask blinding the values committed to.
|
||||
pub mask: C::F,
|
||||
}
|
||||
|
||||
impl<C: Ciphersuite> PedersenVectorCommitment<C> {
|
||||
/// Commit to the vectors of values.
|
||||
///
|
||||
/// This function returns None if the amount of generators is less than the amount of values
|
||||
/// within the relevant vector.
|
||||
pub fn commit(&self, g_bold: &[C::G], h_bold: &[C::G], h: C::G) -> Option<C::G> {
|
||||
if (g_bold.len() < self.g_values.len()) || (h_bold.len() < self.h_values.len()) {
|
||||
None?;
|
||||
};
|
||||
|
||||
let mut terms = vec![(self.mask, h)];
|
||||
for pair in self.g_values.0.iter().cloned().zip(g_bold.iter().cloned()) {
|
||||
terms.push(pair);
|
||||
}
|
||||
for pair in self.h_values.0.iter().cloned().zip(h_bold.iter().cloned()) {
|
||||
terms.push(pair);
|
||||
}
|
||||
let res = multiexp(&terms);
|
||||
terms.zeroize();
|
||||
Some(res)
|
||||
}
|
||||
}
|
||||
265
crypto/evrf/generalized-bulletproofs/src/lincomb.rs
Normal file
@@ -0,0 +1,265 @@
|
||||
use core::ops::{Add, Sub, Mul};
|
||||
|
||||
use zeroize::Zeroize;
|
||||
|
||||
use ciphersuite::group::ff::PrimeField;
|
||||
|
||||
use crate::ScalarVector;
|
||||
|
||||
/// A reference to a variable usable within linear combinations.
|
||||
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
|
||||
#[allow(non_camel_case_types)]
|
||||
pub enum Variable {
|
||||
/// A variable within the left vector of the two vectors multiplied against each other.
|
||||
aL(usize),
|
||||
/// A variable within the right vector of the two vectors multiplied against each other.
|
||||
aR(usize),
|
||||
/// A variable within the output vector of the left vector multiplied by the right vector.
|
||||
aO(usize),
|
||||
/// A variable within a Pedersen vector commitment, committed to with a generator from `g` (bold).
|
||||
CG {
|
||||
/// The commitment being indexed.
|
||||
commitment: usize,
|
||||
/// The index of the variable.
|
||||
index: usize,
|
||||
},
|
||||
/// A variable within a Pedersen vector commitment, committed to with a generator from `h` (bold).
|
||||
CH {
|
||||
/// The commitment being indexed.
|
||||
commitment: usize,
|
||||
/// The index of the variable.
|
||||
index: usize,
|
||||
},
|
||||
/// A variable within a Pedersen commitment.
|
||||
V(usize),
|
||||
}
|
||||
|
||||
// Does a NOP as there shouldn't be anything critical here
|
||||
impl Zeroize for Variable {
|
||||
fn zeroize(&mut self) {}
|
||||
}
|
||||
|
||||
/// A linear combination.
|
||||
///
|
||||
/// Specifically, `WL aL + WR aR + WO aO + WCG C_G + WCH C_H + WV V + c`.
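///
/// For example, `LinComb::empty().term(F::ONE, Variable::aL(0)).term(-F::ONE, Variable::aR(1))`
/// represents `aL[0] - aR[1]`.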
|
||||
#[derive(Clone, PartialEq, Eq, Debug, Zeroize)]
|
||||
#[must_use]
|
||||
pub struct LinComb<F: PrimeField> {
|
||||
pub(crate) highest_a_index: Option<usize>,
|
||||
pub(crate) highest_c_index: Option<usize>,
|
||||
pub(crate) highest_v_index: Option<usize>,
|
||||
|
||||
// Sparse representation of WL/WR/WO
|
||||
pub(crate) WL: Vec<(usize, F)>,
|
||||
pub(crate) WR: Vec<(usize, F)>,
|
||||
pub(crate) WO: Vec<(usize, F)>,
|
||||
// Sparse representation once within a commitment
|
||||
pub(crate) WCG: Vec<Vec<(usize, F)>>,
|
||||
pub(crate) WCH: Vec<Vec<(usize, F)>>,
|
||||
// Sparse representation of WV
|
||||
pub(crate) WV: Vec<(usize, F)>,
|
||||
pub(crate) c: F,
|
||||
}
|
||||
|
||||
impl<F: PrimeField> From<Variable> for LinComb<F> {
|
||||
fn from(constrainable: Variable) -> LinComb<F> {
|
||||
LinComb::empty().term(F::ONE, constrainable)
|
||||
}
|
||||
}
|
||||
|
||||
impl<F: PrimeField> Add<&LinComb<F>> for LinComb<F> {
|
||||
type Output = Self;
|
||||
|
||||
fn add(mut self, constraint: &Self) -> Self {
|
||||
self.highest_a_index = self.highest_a_index.max(constraint.highest_a_index);
|
||||
self.highest_c_index = self.highest_c_index.max(constraint.highest_c_index);
|
||||
self.highest_v_index = self.highest_v_index.max(constraint.highest_v_index);
|
||||
|
||||
self.WL.extend(&constraint.WL);
|
||||
self.WR.extend(&constraint.WR);
|
||||
self.WO.extend(&constraint.WO);
|
||||
while self.WCG.len() < constraint.WCG.len() {
|
||||
self.WCG.push(vec![]);
|
||||
}
|
||||
while self.WCH.len() < constraint.WCH.len() {
|
||||
self.WCH.push(vec![]);
|
||||
}
|
||||
for (sWC, cWC) in self.WCG.iter_mut().zip(&constraint.WCG) {
|
||||
sWC.extend(cWC);
|
||||
}
|
||||
for (sWC, cWC) in self.WCH.iter_mut().zip(&constraint.WCH) {
|
||||
sWC.extend(cWC);
|
||||
}
|
||||
self.WV.extend(&constraint.WV);
|
||||
self.c += constraint.c;
|
||||
self
|
||||
}
|
||||
}
|
||||
|
||||
impl<F: PrimeField> Sub<&LinComb<F>> for LinComb<F> {
|
||||
type Output = Self;
|
||||
|
||||
fn sub(mut self, constraint: &Self) -> Self {
|
||||
self.highest_a_index = self.highest_a_index.max(constraint.highest_a_index);
|
||||
self.highest_c_index = self.highest_c_index.max(constraint.highest_c_index);
|
||||
self.highest_v_index = self.highest_v_index.max(constraint.highest_v_index);
|
||||
|
||||
self.WL.extend(constraint.WL.iter().map(|(i, weight)| (*i, -*weight)));
|
||||
self.WR.extend(constraint.WR.iter().map(|(i, weight)| (*i, -*weight)));
|
||||
self.WO.extend(constraint.WO.iter().map(|(i, weight)| (*i, -*weight)));
|
||||
while self.WCG.len() < constraint.WCG.len() {
|
||||
self.WCG.push(vec![]);
|
||||
}
|
||||
while self.WCH.len() < constraint.WCH.len() {
|
||||
self.WCH.push(vec![]);
|
||||
}
|
||||
for (sWC, cWC) in self.WCG.iter_mut().zip(&constraint.WCG) {
|
||||
sWC.extend(cWC.iter().map(|(i, weight)| (*i, -*weight)));
|
||||
}
|
||||
for (sWC, cWC) in self.WCH.iter_mut().zip(&constraint.WCH) {
|
||||
sWC.extend(cWC.iter().map(|(i, weight)| (*i, -*weight)));
|
||||
}
|
||||
self.WV.extend(constraint.WV.iter().map(|(i, weight)| (*i, -*weight)));
|
||||
self.c -= constraint.c;
|
||||
self
|
||||
}
|
||||
}
|
||||
|
||||
impl<F: PrimeField> Mul<F> for LinComb<F> {
|
||||
type Output = Self;
|
||||
|
||||
fn mul(mut self, scalar: F) -> Self {
|
||||
for (_, weight) in self.WL.iter_mut() {
|
||||
*weight *= scalar;
|
||||
}
|
||||
for (_, weight) in self.WR.iter_mut() {
|
||||
*weight *= scalar;
|
||||
}
|
||||
for (_, weight) in self.WO.iter_mut() {
|
||||
*weight *= scalar;
|
||||
}
|
||||
for WC in self.WCG.iter_mut() {
|
||||
for (_, weight) in WC {
|
||||
*weight *= scalar;
|
||||
}
|
||||
}
|
||||
for WC in self.WCH.iter_mut() {
|
||||
for (_, weight) in WC {
|
||||
*weight *= scalar;
|
||||
}
|
||||
}
|
||||
for (_, weight) in self.WV.iter_mut() {
|
||||
*weight *= scalar;
|
||||
}
|
||||
self.c *= scalar;
|
||||
self
|
||||
}
|
||||
}
|
||||
|
||||
impl<F: PrimeField> LinComb<F> {
|
||||
/// Create an empty linear combination.
|
||||
pub fn empty() -> Self {
|
||||
Self {
|
||||
highest_a_index: None,
|
||||
highest_c_index: None,
|
||||
highest_v_index: None,
|
||||
WL: vec![],
|
||||
WR: vec![],
|
||||
WO: vec![],
|
||||
WCG: vec![],
|
||||
WCH: vec![],
|
||||
WV: vec![],
|
||||
c: F::ZERO,
|
||||
}
|
||||
}
|
||||
|
||||
/// Add a new instance of a term to this linear combination.
|
||||
pub fn term(mut self, scalar: F, constrainable: Variable) -> Self {
|
||||
match constrainable {
|
||||
Variable::aL(i) => {
|
||||
self.highest_a_index = self.highest_a_index.max(Some(i));
|
||||
self.WL.push((i, scalar))
|
||||
}
|
||||
Variable::aR(i) => {
|
||||
self.highest_a_index = self.highest_a_index.max(Some(i));
|
||||
self.WR.push((i, scalar))
|
||||
}
|
||||
Variable::aO(i) => {
|
||||
self.highest_a_index = self.highest_a_index.max(Some(i));
|
||||
self.WO.push((i, scalar))
|
||||
}
|
||||
Variable::CG { commitment: i, index: j } => {
|
||||
self.highest_c_index = self.highest_c_index.max(Some(i));
|
||||
self.highest_a_index = self.highest_a_index.max(Some(j));
|
||||
while self.WCG.len() <= i {
|
||||
self.WCG.push(vec![]);
|
||||
}
|
||||
self.WCG[i].push((j, scalar))
|
||||
}
|
||||
Variable::CH { commitment: i, index: j } => {
|
||||
self.highest_c_index = self.highest_c_index.max(Some(i));
|
||||
self.highest_a_index = self.highest_a_index.max(Some(j));
|
||||
while self.WCH.len() <= i {
|
||||
self.WCH.push(vec![]);
|
||||
}
|
||||
self.WCH[i].push((j, scalar))
|
||||
}
|
||||
Variable::V(i) => {
|
||||
self.highest_v_index = self.highest_v_index.max(Some(i));
|
||||
self.WV.push((i, scalar));
|
||||
}
|
||||
};
|
||||
self
|
||||
}
|
||||
|
||||
/// Add to the constant c.
|
||||
pub fn constant(mut self, scalar: F) -> Self {
|
||||
self.c += scalar;
|
||||
self
|
||||
}
|
||||
|
||||
/// View the current weights for aL.
|
||||
pub fn WL(&self) -> &[(usize, F)] {
|
||||
&self.WL
|
||||
}
|
||||
|
||||
/// View the current weights for aR.
|
||||
pub fn WR(&self) -> &[(usize, F)] {
|
||||
&self.WR
|
||||
}
|
||||
|
||||
/// View the current weights for aO.
|
||||
pub fn WO(&self) -> &[(usize, F)] {
|
||||
&self.WO
|
||||
}
|
||||
|
||||
/// View the current weights for CG.
|
||||
pub fn WCG(&self) -> &[Vec<(usize, F)>] {
|
||||
&self.WCG
|
||||
}
|
||||
|
||||
/// View the current weights for CH.
|
||||
pub fn WCH(&self) -> &[Vec<(usize, F)>] {
|
||||
&self.WCH
|
||||
}
|
||||
|
||||
/// View the current weights for V.
|
||||
pub fn WV(&self) -> &[(usize, F)] {
|
||||
&self.WV
|
||||
}
|
||||
|
||||
/// View the current constant.
|
||||
pub fn c(&self) -> F {
|
||||
self.c
|
||||
}
|
||||
}
|
||||
|
||||
pub(crate) fn accumulate_vector<F: PrimeField>(
|
||||
accumulator: &mut ScalarVector<F>,
|
||||
values: &[(usize, F)],
|
||||
weight: F,
|
||||
) {
|
||||
for (i, coeff) in values {
|
||||
accumulator[*i] += *coeff * weight;
|
||||
}
|
||||
}
|
||||
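A brief usage sketch (assuming a concrete `PrimeField` `F`, such as the Ristretto scalar field used in the tests below): the constraint `aL[0] + 2 aR[0] - 5 = 0` is expressed by accumulating terms and a constant, with the prover's witness expected to make the combination evaluate to zero.

// Usage sketch: constrain aL[0] + 2 aR[0] - 5 to equal zero.
let constraint = LinComb::<F>::empty()
  .term(F::ONE, Variable::aL(0))
  .term(F::from(2u64), Variable::aR(0))
  .constant(-F::from(5u64));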
crypto/evrf/generalized-bulletproofs/src/point_vector.rs (new file, 121 lines)
@@ -0,0 +1,121 @@
|
||||
use core::ops::{Index, IndexMut};
|
||||
|
||||
use zeroize::Zeroize;
|
||||
|
||||
use ciphersuite::Ciphersuite;
|
||||
|
||||
#[cfg(test)]
|
||||
use multiexp::multiexp;
|
||||
|
||||
use crate::ScalarVector;
|
||||
|
||||
/// A point vector struct with the functionality necessary for Bulletproofs.
|
||||
///
|
||||
/// The math operations for this panic upon any invalid operation, such as if vectors of different
|
||||
/// lengths are added. The full extent of invalidity is not fully defined. Only field access is
|
||||
/// guaranteed to have a safe, public API.
|
||||
#[derive(Clone, PartialEq, Eq, Debug, Zeroize)]
|
||||
pub struct PointVector<C: Ciphersuite>(pub(crate) Vec<C::G>);
|
||||
|
||||
impl<C: Ciphersuite> Index<usize> for PointVector<C> {
|
||||
type Output = C::G;
|
||||
fn index(&self, index: usize) -> &C::G {
|
||||
&self.0[index]
|
||||
}
|
||||
}
|
||||
|
||||
impl<C: Ciphersuite> IndexMut<usize> for PointVector<C> {
|
||||
fn index_mut(&mut self, index: usize) -> &mut C::G {
|
||||
&mut self.0[index]
|
||||
}
|
||||
}
|
||||
|
||||
impl<C: Ciphersuite> PointVector<C> {
|
||||
/*
|
||||
pub(crate) fn add(&self, point: impl AsRef<C::G>) -> Self {
|
||||
let mut res = self.clone();
|
||||
for val in res.0.iter_mut() {
|
||||
*val += point.as_ref();
|
||||
}
|
||||
res
|
||||
}
|
||||
pub(crate) fn sub(&self, point: impl AsRef<C::G>) -> Self {
|
||||
let mut res = self.clone();
|
||||
for val in res.0.iter_mut() {
|
||||
*val -= point.as_ref();
|
||||
}
|
||||
res
|
||||
}
|
||||
|
||||
pub(crate) fn mul(&self, scalar: impl core::borrow::Borrow<C::F>) -> Self {
|
||||
let mut res = self.clone();
|
||||
for val in res.0.iter_mut() {
|
||||
*val *= scalar.borrow();
|
||||
}
|
||||
res
|
||||
}
|
||||
|
||||
pub(crate) fn add_vec(&self, vector: &Self) -> Self {
|
||||
debug_assert_eq!(self.len(), vector.len());
|
||||
let mut res = self.clone();
|
||||
for (i, val) in res.0.iter_mut().enumerate() {
|
||||
*val += vector.0[i];
|
||||
}
|
||||
res
|
||||
}
|
||||
|
||||
pub(crate) fn sub_vec(&self, vector: &Self) -> Self {
|
||||
debug_assert_eq!(self.len(), vector.len());
|
||||
let mut res = self.clone();
|
||||
for (i, val) in res.0.iter_mut().enumerate() {
|
||||
*val -= vector.0[i];
|
||||
}
|
||||
res
|
||||
}
|
||||
*/
|
||||
|
||||
pub(crate) fn mul_vec(&self, vector: &ScalarVector<C::F>) -> Self {
|
||||
debug_assert_eq!(self.len(), vector.len());
|
||||
let mut res = self.clone();
|
||||
for (i, val) in res.0.iter_mut().enumerate() {
|
||||
*val *= vector.0[i];
|
||||
}
|
||||
res
|
||||
}
|
||||
|
||||
#[cfg(test)]
|
||||
pub(crate) fn multiexp(&self, vector: &crate::ScalarVector<C::F>) -> C::G {
|
||||
debug_assert_eq!(self.len(), vector.len());
|
||||
let mut res = Vec::with_capacity(self.len());
|
||||
for (point, scalar) in self.0.iter().copied().zip(vector.0.iter().copied()) {
|
||||
res.push((scalar, point));
|
||||
}
|
||||
multiexp(&res)
|
||||
}
|
||||
|
||||
/*
|
||||
pub(crate) fn multiexp_vartime(&self, vector: &ScalarVector<C::F>) -> C::G {
|
||||
debug_assert_eq!(self.len(), vector.len());
|
||||
let mut res = Vec::with_capacity(self.len());
|
||||
for (point, scalar) in self.0.iter().copied().zip(vector.0.iter().copied()) {
|
||||
res.push((scalar, point));
|
||||
}
|
||||
multiexp_vartime(&res)
|
||||
}
|
||||
|
||||
pub(crate) fn sum(&self) -> C::G {
|
||||
self.0.iter().sum()
|
||||
}
|
||||
*/
|
||||
|
||||
pub(crate) fn len(&self) -> usize {
|
||||
self.0.len()
|
||||
}
|
||||
|
||||
pub(crate) fn split(mut self) -> (Self, Self) {
|
||||
assert!(self.len() > 1);
|
||||
let r = self.0.split_off(self.0.len() / 2);
|
||||
debug_assert_eq!(self.len(), r.len());
|
||||
(self, PointVector(r))
|
||||
}
|
||||
}
|
||||
crypto/evrf/generalized-bulletproofs/src/scalar_vector.rs (new file, 146 lines)
@@ -0,0 +1,146 @@
|
||||
use core::ops::{Index, IndexMut, Add, Sub, Mul};
|
||||
|
||||
use zeroize::Zeroize;
|
||||
|
||||
use ciphersuite::group::ff::PrimeField;
|
||||
|
||||
/// A scalar vector struct with the functionality necessary for Bulletproofs.
|
||||
///
|
||||
/// The math operations for this panic upon any invalid operation, such as if vectors of different
|
||||
/// lengths are added. The full extent of invalidity is not fully defined. Only `new`, `len`,
|
||||
/// and field access are guaranteed to have a safe, public API.
|
||||
#[derive(Clone, PartialEq, Eq, Debug)]
|
||||
pub struct ScalarVector<F: PrimeField>(pub(crate) Vec<F>);
|
||||
|
||||
impl<F: PrimeField + Zeroize> Zeroize for ScalarVector<F> {
|
||||
fn zeroize(&mut self) {
|
||||
self.0.zeroize()
|
||||
}
|
||||
}
|
||||
|
||||
impl<F: PrimeField> Index<usize> for ScalarVector<F> {
|
||||
type Output = F;
|
||||
fn index(&self, index: usize) -> &F {
|
||||
&self.0[index]
|
||||
}
|
||||
}
|
||||
impl<F: PrimeField> IndexMut<usize> for ScalarVector<F> {
|
||||
fn index_mut(&mut self, index: usize) -> &mut F {
|
||||
&mut self.0[index]
|
||||
}
|
||||
}
|
||||
|
||||
impl<F: PrimeField> Add<F> for ScalarVector<F> {
|
||||
type Output = ScalarVector<F>;
|
||||
fn add(mut self, scalar: F) -> Self {
|
||||
for s in &mut self.0 {
|
||||
*s += scalar;
|
||||
}
|
||||
self
|
||||
}
|
||||
}
|
||||
impl<F: PrimeField> Sub<F> for ScalarVector<F> {
|
||||
type Output = ScalarVector<F>;
|
||||
fn sub(mut self, scalar: F) -> Self {
|
||||
for s in &mut self.0 {
|
||||
*s -= scalar;
|
||||
}
|
||||
self
|
||||
}
|
||||
}
|
||||
impl<F: PrimeField> Mul<F> for ScalarVector<F> {
|
||||
type Output = ScalarVector<F>;
|
||||
fn mul(mut self, scalar: F) -> Self {
|
||||
for s in &mut self.0 {
|
||||
*s *= scalar;
|
||||
}
|
||||
self
|
||||
}
|
||||
}
|
||||
|
||||
impl<F: PrimeField> Add<&ScalarVector<F>> for ScalarVector<F> {
|
||||
type Output = ScalarVector<F>;
|
||||
fn add(mut self, other: &ScalarVector<F>) -> Self {
|
||||
assert_eq!(self.len(), other.len());
|
||||
for (s, o) in self.0.iter_mut().zip(other.0.iter()) {
|
||||
*s += o;
|
||||
}
|
||||
self
|
||||
}
|
||||
}
|
||||
impl<F: PrimeField> Sub<&ScalarVector<F>> for ScalarVector<F> {
|
||||
type Output = ScalarVector<F>;
|
||||
fn sub(mut self, other: &ScalarVector<F>) -> Self {
|
||||
assert_eq!(self.len(), other.len());
|
||||
for (s, o) in self.0.iter_mut().zip(other.0.iter()) {
|
||||
*s -= o;
|
||||
}
|
||||
self
|
||||
}
|
||||
}
|
||||
impl<F: PrimeField> Mul<&ScalarVector<F>> for ScalarVector<F> {
|
||||
type Output = ScalarVector<F>;
|
||||
fn mul(mut self, other: &ScalarVector<F>) -> Self {
|
||||
assert_eq!(self.len(), other.len());
|
||||
for (s, o) in self.0.iter_mut().zip(other.0.iter()) {
|
||||
*s *= o;
|
||||
}
|
||||
self
|
||||
}
|
||||
}
|
||||
|
||||
impl<F: PrimeField> ScalarVector<F> {
|
||||
/// Create a new scalar vector, initialized with `len` zero scalars.
|
||||
pub fn new(len: usize) -> Self {
|
||||
ScalarVector(vec![F::ZERO; len])
|
||||
}
|
||||
|
||||
pub(crate) fn powers(x: F, len: usize) -> Self {
|
||||
assert!(len != 0);
|
||||
|
||||
let mut res = Vec::with_capacity(len);
|
||||
res.push(F::ONE);
|
||||
res.push(x);
|
||||
for i in 2 .. len {
|
||||
res.push(res[i - 1] * x);
|
||||
}
|
||||
res.truncate(len);
|
||||
ScalarVector(res)
|
||||
}
|
||||
|
||||
/// The length of this scalar vector.
|
||||
#[allow(clippy::len_without_is_empty)]
|
||||
pub fn len(&self) -> usize {
|
||||
self.0.len()
|
||||
}
|
||||
|
||||
/*
|
||||
pub(crate) fn sum(mut self) -> F {
|
||||
self.0.drain(..).sum()
|
||||
}
|
||||
*/
|
||||
|
||||
pub(crate) fn inner_product<'a, V: Iterator<Item = &'a F>>(&self, vector: V) -> F {
|
||||
let mut count = 0;
|
||||
let mut res = F::ZERO;
|
||||
for (a, b) in self.0.iter().zip(vector) {
|
||||
res += *a * b;
|
||||
count += 1;
|
||||
}
|
||||
debug_assert_eq!(self.len(), count);
|
||||
res
|
||||
}
|
||||
|
||||
pub(crate) fn split(mut self) -> (Self, Self) {
|
||||
assert!(self.len() > 1);
|
||||
let r = self.0.split_off(self.0.len() / 2);
|
||||
debug_assert_eq!(self.len(), r.len());
|
||||
(self, ScalarVector(r))
|
||||
}
|
||||
}
|
||||
|
||||
impl<F: PrimeField> From<Vec<F>> for ScalarVector<F> {
|
||||
fn from(vec: Vec<F>) -> Self {
|
||||
Self(vec)
|
||||
}
|
||||
}
|
||||
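A short sketch of the public surface (again assuming some `PrimeField` `F`): vectors are created zeroed, filled via indexing or `From<Vec<F>>`, and combined with the element-wise operators, which panic on mismatched lengths.

// Usage sketch: element-wise multiplication of two length-2 vectors.
let mut a = ScalarVector::<F>::new(2);
a[0] = F::ONE;
a[1] = F::from(2u64);
let b = ScalarVector::from(vec![F::from(3u64), F::from(4u64)]);
let product = a * &b; // [1 * 3, 2 * 4]
assert_eq!(product[1], F::from(8u64));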
crypto/evrf/generalized-bulletproofs/src/tests/arithmetic_circuit_proof.rs (new file, 250 lines)
@@ -0,0 +1,250 @@
|
||||
use rand_core::{RngCore, OsRng};
|
||||
|
||||
use ciphersuite::{group::ff::Field, Ciphersuite, Ristretto};
|
||||
|
||||
use crate::{
|
||||
ScalarVector, PedersenCommitment, PedersenVectorCommitment,
|
||||
transcript::*,
|
||||
arithmetic_circuit_proof::{
|
||||
Variable, LinComb, ArithmeticCircuitStatement, ArithmeticCircuitWitness,
|
||||
},
|
||||
tests::generators,
|
||||
};
|
||||
|
||||
#[test]
|
||||
fn test_zero_arithmetic_circuit() {
|
||||
let generators = generators(1);
|
||||
|
||||
let value = <Ristretto as Ciphersuite>::F::random(&mut OsRng);
|
||||
let gamma = <Ristretto as Ciphersuite>::F::random(&mut OsRng);
|
||||
let commitment = (generators.g() * value) + (generators.h() * gamma);
|
||||
let V = vec![commitment];
|
||||
|
||||
let aL = ScalarVector::<<Ristretto as Ciphersuite>::F>(vec![<Ristretto as Ciphersuite>::F::ZERO]);
|
||||
let aR = aL.clone();
|
||||
|
||||
let mut transcript = Transcript::new([0; 32]);
|
||||
let commitments = transcript.write_commitments(vec![], V);
|
||||
let statement = ArithmeticCircuitStatement::<Ristretto>::new(
|
||||
generators.reduce(1).unwrap(),
|
||||
vec![],
|
||||
commitments.clone(),
|
||||
)
|
||||
.unwrap();
|
||||
let witness = ArithmeticCircuitWitness::<Ristretto>::new(
|
||||
aL,
|
||||
aR,
|
||||
vec![],
|
||||
vec![PedersenCommitment { value, mask: gamma }],
|
||||
)
|
||||
.unwrap();
|
||||
|
||||
let proof = {
|
||||
statement.clone().prove(&mut OsRng, &mut transcript, witness).unwrap();
|
||||
transcript.complete()
|
||||
};
|
||||
let mut verifier = generators.batch_verifier();
|
||||
|
||||
let mut transcript = VerifierTranscript::new([0; 32], &proof);
|
||||
let verifier_commitments = transcript.read_commitments(0, 1);
assert_eq!(commitments, verifier_commitments.unwrap());
|
||||
statement.verify(&mut OsRng, &mut verifier, &mut transcript).unwrap();
|
||||
assert!(generators.verify(verifier));
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_vector_commitment_arithmetic_circuit() {
|
||||
let generators = generators(2);
|
||||
let reduced = generators.reduce(2).unwrap();
|
||||
|
||||
let v1 = <Ristretto as Ciphersuite>::F::random(&mut OsRng);
|
||||
let v2 = <Ristretto as Ciphersuite>::F::random(&mut OsRng);
|
||||
let v3 = <Ristretto as Ciphersuite>::F::random(&mut OsRng);
|
||||
let v4 = <Ristretto as Ciphersuite>::F::random(&mut OsRng);
|
||||
let gamma = <Ristretto as Ciphersuite>::F::random(&mut OsRng);
|
||||
let commitment = (reduced.g_bold(0) * v1) +
|
||||
(reduced.g_bold(1) * v2) +
|
||||
(reduced.h_bold(0) * v3) +
|
||||
(reduced.h_bold(1) * v4) +
|
||||
(generators.h() * gamma);
|
||||
let V = vec![];
|
||||
let C = vec![commitment];
|
||||
|
||||
let zero_vec =
|
||||
|| ScalarVector::<<Ristretto as Ciphersuite>::F>(vec![<Ristretto as Ciphersuite>::F::ZERO]);
|
||||
|
||||
let aL = zero_vec();
|
||||
let aR = zero_vec();
|
||||
|
||||
let mut transcript = Transcript::new([0; 32]);
|
||||
let commitments = transcript.write_commitments(C, V);
|
||||
let statement = ArithmeticCircuitStatement::<Ristretto>::new(
|
||||
reduced,
|
||||
vec![LinComb::empty()
|
||||
.term(<Ristretto as Ciphersuite>::F::ONE, Variable::CG { commitment: 0, index: 0 })
|
||||
.term(<Ristretto as Ciphersuite>::F::from(2u64), Variable::CG { commitment: 0, index: 1 })
|
||||
.term(<Ristretto as Ciphersuite>::F::from(3u64), Variable::CH { commitment: 0, index: 0 })
|
||||
.term(<Ristretto as Ciphersuite>::F::from(4u64), Variable::CH { commitment: 0, index: 1 })
|
||||
.constant(-(v1 + (v2 + v2) + (v3 + v3 + v3) + (v4 + v4 + v4 + v4)))],
|
||||
commitments.clone(),
|
||||
)
|
||||
.unwrap();
|
||||
let witness = ArithmeticCircuitWitness::<Ristretto>::new(
|
||||
aL,
|
||||
aR,
|
||||
vec![PedersenVectorCommitment {
|
||||
g_values: ScalarVector(vec![v1, v2]),
|
||||
h_values: ScalarVector(vec![v3, v4]),
|
||||
mask: gamma,
|
||||
}],
|
||||
vec![],
|
||||
)
|
||||
.unwrap();
|
||||
|
||||
let proof = {
|
||||
statement.clone().prove(&mut OsRng, &mut transcript, witness).unwrap();
|
||||
transcript.complete()
|
||||
};
|
||||
let mut verifier = generators.batch_verifier();
|
||||
|
||||
let mut transcript = VerifierTranscript::new([0; 32], &proof);
|
||||
let verifier_commitments = transcript.read_commitments(1, 0);
assert_eq!(commitments, verifier_commitments.unwrap());
|
||||
statement.verify(&mut OsRng, &mut verifier, &mut transcript).unwrap();
|
||||
assert!(generators.verify(verifier));
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn fuzz_test_arithmetic_circuit() {
|
||||
let generators = generators(32);
|
||||
|
||||
for i in 0 .. 100 {
|
||||
dbg!(i);
|
||||
|
||||
// Create aL, aR, aO
|
||||
let mut aL = ScalarVector(vec![]);
|
||||
let mut aR = ScalarVector(vec![]);
|
||||
while aL.len() < ((OsRng.next_u64() % 8) + 1).try_into().unwrap() {
|
||||
aL.0.push(<Ristretto as Ciphersuite>::F::random(&mut OsRng));
|
||||
}
|
||||
while aR.len() < aL.len() {
|
||||
aR.0.push(<Ristretto as Ciphersuite>::F::random(&mut OsRng));
|
||||
}
|
||||
let aO = aL.clone() * &aR;
|
||||
|
||||
// Create C
|
||||
let mut C = vec![];
|
||||
while C.len() < (OsRng.next_u64() % 16).try_into().unwrap() {
|
||||
let mut g_values = ScalarVector(vec![]);
|
||||
while g_values.0.len() < ((OsRng.next_u64() % 8) + 1).try_into().unwrap() {
|
||||
g_values.0.push(<Ristretto as Ciphersuite>::F::random(&mut OsRng));
|
||||
}
|
||||
let mut h_values = ScalarVector(vec![]);
|
||||
while h_values.0.len() < ((OsRng.next_u64() % 8) + 1).try_into().unwrap() {
|
||||
h_values.0.push(<Ristretto as Ciphersuite>::F::random(&mut OsRng));
|
||||
}
|
||||
C.push(PedersenVectorCommitment {
|
||||
g_values,
|
||||
h_values,
|
||||
mask: <Ristretto as Ciphersuite>::F::random(&mut OsRng),
|
||||
});
|
||||
}
|
||||
|
||||
// Create V
|
||||
let mut V = vec![];
|
||||
while V.len() < (OsRng.next_u64() % 4).try_into().unwrap() {
|
||||
V.push(PedersenCommitment {
|
||||
value: <Ristretto as Ciphersuite>::F::random(&mut OsRng),
|
||||
mask: <Ristretto as Ciphersuite>::F::random(&mut OsRng),
|
||||
});
|
||||
}
|
||||
|
||||
// Generate random constraints
|
||||
let mut constraints = vec![];
|
||||
for _ in 0 .. (OsRng.next_u64() % 8).try_into().unwrap() {
|
||||
let mut eval = <Ristretto as Ciphersuite>::F::ZERO;
|
||||
let mut constraint = LinComb::empty();
|
||||
|
||||
for _ in 0 .. (OsRng.next_u64() % 4) {
|
||||
let index = usize::try_from(OsRng.next_u64()).unwrap() % aL.len();
|
||||
let weight = <Ristretto as Ciphersuite>::F::random(&mut OsRng);
|
||||
constraint = constraint.term(weight, Variable::aL(index));
|
||||
eval += weight * aL[index];
|
||||
}
|
||||
|
||||
for _ in 0 .. (OsRng.next_u64() % 4) {
|
||||
let index = usize::try_from(OsRng.next_u64()).unwrap() % aR.len();
|
||||
let weight = <Ristretto as Ciphersuite>::F::random(&mut OsRng);
|
||||
constraint = constraint.term(weight, Variable::aR(index));
|
||||
eval += weight * aR[index];
|
||||
}
|
||||
|
||||
for _ in 0 .. (OsRng.next_u64() % 4) {
|
||||
let index = usize::try_from(OsRng.next_u64()).unwrap() % aO.len();
|
||||
let weight = <Ristretto as Ciphersuite>::F::random(&mut OsRng);
|
||||
constraint = constraint.term(weight, Variable::aO(index));
|
||||
eval += weight * aO[index];
|
||||
}
|
||||
|
||||
for (commitment, C) in C.iter().enumerate() {
|
||||
for _ in 0 .. (OsRng.next_u64() % 4) {
|
||||
let index = usize::try_from(OsRng.next_u64()).unwrap() % C.g_values.len();
|
||||
let weight = <Ristretto as Ciphersuite>::F::random(&mut OsRng);
|
||||
constraint = constraint.term(weight, Variable::CG { commitment, index });
|
||||
eval += weight * C.g_values[index];
|
||||
}
|
||||
|
||||
for _ in 0 .. (OsRng.next_u64() % 4) {
|
||||
let index = usize::try_from(OsRng.next_u64()).unwrap() % C.h_values.len();
|
||||
let weight = <Ristretto as Ciphersuite>::F::random(&mut OsRng);
|
||||
constraint = constraint.term(weight, Variable::CH { commitment, index });
|
||||
eval += weight * C.h_values[index];
|
||||
}
|
||||
}
|
||||
|
||||
if !V.is_empty() {
|
||||
for _ in 0 .. (OsRng.next_u64() % 4) {
|
||||
let index = usize::try_from(OsRng.next_u64()).unwrap() % V.len();
|
||||
let weight = <Ristretto as Ciphersuite>::F::random(&mut OsRng);
|
||||
constraint = constraint.term(weight, Variable::V(index));
|
||||
eval += weight * V[index].value;
|
||||
}
|
||||
}
|
||||
|
||||
constraint = constraint.constant(-eval);
|
||||
|
||||
constraints.push(constraint);
|
||||
}
|
||||
|
||||
let mut transcript = Transcript::new([0; 32]);
|
||||
let commitments = transcript.write_commitments(
|
||||
C.iter()
|
||||
.map(|C| {
|
||||
C.commit(generators.g_bold_slice(), generators.h_bold_slice(), generators.h()).unwrap()
|
||||
})
|
||||
.collect(),
|
||||
V.iter().map(|V| V.commit(generators.g(), generators.h())).collect(),
|
||||
);
|
||||
|
||||
let statement = ArithmeticCircuitStatement::<Ristretto>::new(
|
||||
generators.reduce(16).unwrap(),
|
||||
constraints,
|
||||
commitments.clone(),
|
||||
)
|
||||
.unwrap();
|
||||
|
||||
let witness = ArithmeticCircuitWitness::<Ristretto>::new(aL, aR, C.clone(), V.clone()).unwrap();
|
||||
|
||||
let proof = {
|
||||
statement.clone().prove(&mut OsRng, &mut transcript, witness).unwrap();
|
||||
transcript.complete()
|
||||
};
|
||||
let mut verifier = generators.batch_verifier();
|
||||
|
||||
let mut transcript = VerifierTranscript::new([0; 32], &proof);
|
||||
let verifier_commitments = transcript.read_commitments(C.len(), V.len());
assert_eq!(commitments, verifier_commitments.unwrap());
|
||||
statement.verify(&mut OsRng, &mut verifier, &mut transcript).unwrap();
|
||||
assert!(generators.verify(verifier));
|
||||
}
|
||||
}
|
||||
crypto/evrf/generalized-bulletproofs/src/tests/inner_product.rs (new file, 113 lines)
@@ -0,0 +1,113 @@
|
||||
// The inner product relation is P = sum(g_bold * a, h_bold * b, g * (a * b))
|
||||
|
||||
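In conventional notation, with generator vectors \mathbf{g}, \mathbf{h}, an independent generator g, and equal-length witness vectors a, b, the relation above is

  P = \sum_i a_i \mathbf{g}_i + \sum_i b_i \mathbf{h}_i + \langle a, b \rangle \cdot g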
use rand_core::OsRng;
|
||||
|
||||
use ciphersuite::{
|
||||
group::{ff::Field, Group},
|
||||
Ciphersuite, Ristretto,
|
||||
};
|
||||
|
||||
use crate::{
|
||||
ScalarVector, PointVector,
|
||||
transcript::*,
|
||||
inner_product::{P, IpStatement, IpWitness},
|
||||
tests::generators,
|
||||
};
|
||||
|
||||
#[test]
|
||||
fn test_zero_inner_product() {
|
||||
let P = <Ristretto as Ciphersuite>::G::identity();
|
||||
|
||||
let generators = generators::<Ristretto>(1);
|
||||
let reduced = generators.reduce(1).unwrap();
|
||||
let witness = IpWitness::<Ristretto>::new(
|
||||
ScalarVector::<<Ristretto as Ciphersuite>::F>::new(1),
|
||||
ScalarVector::<<Ristretto as Ciphersuite>::F>::new(1),
|
||||
)
|
||||
.unwrap();
|
||||
|
||||
let proof = {
|
||||
let mut transcript = Transcript::new([0; 32]);
|
||||
IpStatement::<Ristretto>::new(
|
||||
reduced,
|
||||
ScalarVector(vec![<Ristretto as Ciphersuite>::F::ONE; 1]),
|
||||
<Ristretto as Ciphersuite>::F::ONE,
|
||||
P::Prover(P),
|
||||
)
|
||||
.unwrap()
|
||||
.clone()
|
||||
.prove(&mut transcript, witness)
|
||||
.unwrap();
|
||||
transcript.complete()
|
||||
};
|
||||
|
||||
let mut verifier = generators.batch_verifier();
|
||||
IpStatement::<Ristretto>::new(
|
||||
reduced,
|
||||
ScalarVector(vec![<Ristretto as Ciphersuite>::F::ONE; 1]),
|
||||
<Ristretto as Ciphersuite>::F::ONE,
|
||||
P::Verifier { verifier_weight: <Ristretto as Ciphersuite>::F::ONE },
|
||||
)
|
||||
.unwrap()
|
||||
.verify(&mut verifier, &mut VerifierTranscript::new([0; 32], &proof))
|
||||
.unwrap();
|
||||
assert!(generators.verify(verifier));
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_inner_product() {
|
||||
// P = sum(g_bold * a, h_bold * b)
|
||||
let generators = generators::<Ristretto>(32);
|
||||
let mut verifier = generators.batch_verifier();
|
||||
for i in [1, 2, 4, 8, 16, 32] {
|
||||
let generators = generators.reduce(i).unwrap();
|
||||
let g = generators.g();
|
||||
assert_eq!(generators.len(), i);
|
||||
let mut g_bold = vec![];
|
||||
let mut h_bold = vec![];
|
||||
for i in 0 .. i {
|
||||
g_bold.push(generators.g_bold(i));
|
||||
h_bold.push(generators.h_bold(i));
|
||||
}
|
||||
let g_bold = PointVector::<Ristretto>(g_bold);
|
||||
let h_bold = PointVector::<Ristretto>(h_bold);
|
||||
|
||||
let mut a = ScalarVector::<<Ristretto as Ciphersuite>::F>::new(i);
|
||||
let mut b = ScalarVector::<<Ristretto as Ciphersuite>::F>::new(i);
|
||||
|
||||
for i in 0 .. i {
|
||||
a[i] = <Ristretto as Ciphersuite>::F::random(&mut OsRng);
|
||||
b[i] = <Ristretto as Ciphersuite>::F::random(&mut OsRng);
|
||||
}
|
||||
|
||||
let P = g_bold.multiexp(&a) + h_bold.multiexp(&b) + (g * a.inner_product(b.0.iter()));
|
||||
|
||||
let witness = IpWitness::<Ristretto>::new(a, b).unwrap();
|
||||
|
||||
let proof = {
|
||||
let mut transcript = Transcript::new([0; 32]);
|
||||
IpStatement::<Ristretto>::new(
|
||||
generators,
|
||||
ScalarVector(vec![<Ristretto as Ciphersuite>::F::ONE; i]),
|
||||
<Ristretto as Ciphersuite>::F::ONE,
|
||||
P::Prover(P),
|
||||
)
|
||||
.unwrap()
|
||||
.prove(&mut transcript, witness)
|
||||
.unwrap();
|
||||
transcript.complete()
|
||||
};
|
||||
|
||||
verifier.additional.push((<Ristretto as Ciphersuite>::F::ONE, P));
|
||||
IpStatement::<Ristretto>::new(
|
||||
generators,
|
||||
ScalarVector(vec![<Ristretto as Ciphersuite>::F::ONE; i]),
|
||||
<Ristretto as Ciphersuite>::F::ONE,
|
||||
P::Verifier { verifier_weight: <Ristretto as Ciphersuite>::F::ONE },
|
||||
)
|
||||
.unwrap()
|
||||
.verify(&mut verifier, &mut VerifierTranscript::new([0; 32], &proof))
|
||||
.unwrap();
|
||||
}
|
||||
assert!(generators.verify(verifier));
|
||||
}
|
||||
crypto/evrf/generalized-bulletproofs/src/tests/mod.rs (new file, 27 lines)
@@ -0,0 +1,27 @@
use rand_core::OsRng;

use ciphersuite::{group::Group, Ciphersuite};

use crate::{Generators, padded_pow_of_2};

#[cfg(test)]
mod inner_product;

#[cfg(test)]
mod arithmetic_circuit_proof;

/// Generate a set of generators for testing purposes.
///
/// This should not be considered secure.
pub fn generators<C: Ciphersuite>(n: usize) -> Generators<C> {
  assert_eq!(padded_pow_of_2(n), n, "amount of generators wasn't a power of 2");

  let gens = || {
    let mut res = Vec::with_capacity(n);
    for _ in 0 .. n {
      res.push(C::G::random(&mut OsRng));
    }
    res
  };
  Generators::new(C::G::random(&mut OsRng), C::G::random(&mut OsRng), gens(), gens()).unwrap()
}
crypto/evrf/generalized-bulletproofs/src/transcript.rs (new file, 188 lines)
@@ -0,0 +1,188 @@
|
||||
use std::io;
|
||||
|
||||
use blake2::{Digest, Blake2b512};
|
||||
|
||||
use ciphersuite::{
|
||||
group::{ff::PrimeField, GroupEncoding},
|
||||
Ciphersuite,
|
||||
};
|
||||
|
||||
use crate::PointVector;
|
||||
|
||||
const SCALAR: u8 = 0;
|
||||
const POINT: u8 = 1;
|
||||
const CHALLENGE: u8 = 2;
|
||||
|
||||
fn challenge<F: PrimeField>(digest: &mut Blake2b512) -> F {
|
||||
// Panic if the field is so wide that reducing the 512-bit hash output wouldn't yield an
// unbiased scalar
|
||||
debug_assert!((F::NUM_BITS + 128) < 512);
|
||||
|
||||
digest.update([CHALLENGE]);
|
||||
let chl = digest.clone().finalize();
|
||||
|
||||
let mut res = F::ZERO;
|
||||
for (i, mut byte) in chl.iter().cloned().enumerate() {
|
||||
for j in 0 .. 8 {
|
||||
let lsb = byte & 1;
|
||||
let mut bit = F::from(u64::from(lsb));
|
||||
for _ in 0 .. ((i * 8) + j) {
|
||||
bit = bit.double();
|
||||
}
|
||||
res += bit;
|
||||
|
||||
byte >>= 1;
|
||||
}
|
||||
}
|
||||
|
||||
// Negligible probability
|
||||
if bool::from(res.is_zero()) {
|
||||
panic!("zero challenge");
|
||||
}
|
||||
|
||||
res
|
||||
}
|
||||
|
||||
/// Commitments written to/read from a transcript.
|
||||
// We use a dedicated type for this to coerce the caller into transcripting the commitments as
|
||||
// expected.
|
||||
#[cfg_attr(test, derive(Clone, PartialEq, Debug))]
|
||||
pub struct Commitments<C: Ciphersuite> {
|
||||
pub(crate) C: PointVector<C>,
|
||||
pub(crate) V: PointVector<C>,
|
||||
}
|
||||
|
||||
impl<C: Ciphersuite> Commitments<C> {
|
||||
/// The vector commitments.
|
||||
pub fn C(&self) -> &[C::G] {
|
||||
&self.C.0
|
||||
}
|
||||
/// The non-vector commitments.
|
||||
pub fn V(&self) -> &[C::G] {
|
||||
&self.V.0
|
||||
}
|
||||
}
|
||||
|
||||
/// A transcript for producing proofs.
|
||||
pub struct Transcript {
|
||||
digest: Blake2b512,
|
||||
transcript: Vec<u8>,
|
||||
}
|
||||
|
||||
/*
|
||||
We define our proofs as Vec<u8> and derive our transcripts from the values we deserialize from
|
||||
them. This format assumes the order of the values read, their size, and their quantity are
|
||||
constant to the context.
|
||||
*/
|
||||
impl Transcript {
|
||||
/// Create a new transcript off some context.
|
||||
pub fn new(context: [u8; 32]) -> Self {
|
||||
let mut digest = Blake2b512::new();
|
||||
digest.update(context);
|
||||
Self { digest, transcript: Vec::with_capacity(1024) }
|
||||
}
|
||||
|
||||
/// Push a scalar onto the transcript.
|
||||
pub fn push_scalar(&mut self, scalar: impl PrimeField) {
|
||||
self.digest.update([SCALAR]);
|
||||
let bytes = scalar.to_repr();
|
||||
self.digest.update(bytes);
|
||||
self.transcript.extend(bytes.as_ref());
|
||||
}
|
||||
|
||||
/// Push a point onto the transcript.
|
||||
pub fn push_point(&mut self, point: impl GroupEncoding) {
|
||||
self.digest.update([POINT]);
|
||||
let bytes = point.to_bytes();
|
||||
self.digest.update(bytes);
|
||||
self.transcript.extend(bytes.as_ref());
|
||||
}
|
||||
|
||||
/// Write the Pedersen (vector) commitments to this transcript.
|
||||
pub fn write_commitments<C: Ciphersuite>(
|
||||
&mut self,
|
||||
C: Vec<C::G>,
|
||||
V: Vec<C::G>,
|
||||
) -> Commitments<C> {
|
||||
for C in &C {
|
||||
self.push_point(*C);
|
||||
}
|
||||
for V in &V {
|
||||
self.push_point(*V);
|
||||
}
|
||||
Commitments { C: PointVector(C), V: PointVector(V) }
|
||||
}
|
||||
|
||||
/// Sample a challenge.
|
||||
pub fn challenge<F: PrimeField>(&mut self) -> F {
|
||||
challenge(&mut self.digest)
|
||||
}
|
||||
|
||||
/// Complete a transcript, yielding the fully serialized proof.
|
||||
pub fn complete(self) -> Vec<u8> {
|
||||
self.transcript
|
||||
}
|
||||
}
|
||||
|
||||
/// A transcript for verifying proofs.
|
||||
pub struct VerifierTranscript<'a> {
|
||||
digest: Blake2b512,
|
||||
transcript: &'a [u8],
|
||||
}
|
||||
|
||||
impl<'a> VerifierTranscript<'a> {
|
||||
/// Create a new transcript to verify a proof with.
|
||||
pub fn new(context: [u8; 32], proof: &'a [u8]) -> Self {
|
||||
let mut digest = Blake2b512::new();
|
||||
digest.update(context);
|
||||
Self { digest, transcript: proof }
|
||||
}
|
||||
|
||||
/// Read a scalar from the transcript.
|
||||
pub fn read_scalar<C: Ciphersuite>(&mut self) -> io::Result<C::F> {
|
||||
let scalar = C::read_F(&mut self.transcript)?;
|
||||
self.digest.update([SCALAR]);
|
||||
let bytes = scalar.to_repr();
|
||||
self.digest.update(bytes);
|
||||
Ok(scalar)
|
||||
}
|
||||
|
||||
/// Read a point from the transcript.
|
||||
pub fn read_point<C: Ciphersuite>(&mut self) -> io::Result<C::G> {
|
||||
let point = C::read_G(&mut self.transcript)?;
|
||||
self.digest.update([POINT]);
|
||||
let bytes = point.to_bytes();
|
||||
self.digest.update(bytes);
|
||||
Ok(point)
|
||||
}
|
||||
|
||||
/// Read the Pedersen (Vector) Commitments from the transcript.
|
||||
///
|
||||
/// The lengths of the vectors are not transcripted.
|
||||
#[allow(clippy::type_complexity)]
|
||||
pub fn read_commitments<C: Ciphersuite>(
|
||||
&mut self,
|
||||
C: usize,
|
||||
V: usize,
|
||||
) -> io::Result<Commitments<C>> {
|
||||
let mut C_vec = Vec::with_capacity(C);
|
||||
for _ in 0 .. C {
|
||||
C_vec.push(self.read_point::<C>()?);
|
||||
}
|
||||
let mut V_vec = Vec::with_capacity(V);
|
||||
for _ in 0 .. V {
|
||||
V_vec.push(self.read_point::<C>()?);
|
||||
}
|
||||
Ok(Commitments { C: PointVector(C_vec), V: PointVector(V_vec) })
|
||||
}
|
||||
|
||||
/// Sample a challenge.
|
||||
pub fn challenge<F: PrimeField>(&mut self) -> F {
|
||||
challenge(&mut self.digest)
|
||||
}
|
||||
|
||||
/// Complete the transcript, returning the advanced slice.
|
||||
pub fn complete(self) -> &'a [u8] {
|
||||
self.transcript
|
||||
}
|
||||
}
|
||||
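A sketch of how the two transcript halves mirror each other (assuming some `Ciphersuite` `C` and a point `point: C::G` the prover wants to communicate). Since neither lengths nor ordering are serialized, the verifier must read values in exactly the order the prover pushed them for the challenges to match.

// Prover: push the point, derive a challenge, and emit the serialized proof.
let mut transcript = Transcript::new([0; 32]);
transcript.push_point(point);
let challenge: C::F = transcript.challenge();
let proof = transcript.complete();

// Verifier: reading in the identical order reproduces the identical challenge.
let mut transcript = VerifierTranscript::new([0; 32], &proof);
let read_point = transcript.read_point::<C>().unwrap();
assert_eq!(read_point, point);
assert_eq!(transcript.challenge::<C::F>(), challenge);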
crypto/evrf/secq256k1/Cargo.toml (new file, 39 lines)
@@ -0,0 +1,39 @@
[package]
name = "secq256k1"
version = "0.1.0"
description = "An implementation of the curve secp256k1 cycles with"
license = "MIT"
repository = "https://github.com/serai-dex/serai/tree/develop/crypto/evrf/secq256k1"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = ["secp256k1", "secq256k1", "group"]
edition = "2021"

[package.metadata.docs.rs]
all-features = true
rustdoc-args = ["--cfg", "docsrs"]

[dependencies]
rustversion = "1"
hex-literal = { version = "0.4", default-features = false }

rand_core = { version = "0.6", default-features = false, features = ["std"] }

zeroize = { version = "^1.5", default-features = false, features = ["std", "zeroize_derive"] }
subtle = { version = "^2.4", default-features = false, features = ["std"] }

generic-array = { version = "0.14", default-features = false }
crypto-bigint = { version = "0.5", default-features = false, features = ["zeroize"] }

k256 = { version = "0.13", default-features = false, features = ["arithmetic"] }

blake2 = { version = "0.10", default-features = false, features = ["std"] }
ciphersuite = { path = "../../ciphersuite", version = "0.4", default-features = false, features = ["std"] }
ec-divisors = { path = "../divisors" }
generalized-bulletproofs-ec-gadgets = { path = "../ec-gadgets" }

[dev-dependencies]
hex = "0.4"

rand_core = { version = "0.6", features = ["std"] }

ff-group-tests = { path = "../../ff-group-tests" }
crypto/evrf/secq256k1/LICENSE (new file, 21 lines)
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2022-2024 Luke Parker

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
crypto/evrf/secq256k1/README.md (new file, 5 lines)
@@ -0,0 +1,5 @@
# secq256k1

An implementation of the curve secp256k1 cycles with.

Scalars and field elements are encoded in their big-endian formats.
crypto/evrf/secq256k1/src/backend.rs (new file, 295 lines)
@@ -0,0 +1,295 @@
|
||||
use zeroize::Zeroize;
|
||||
|
||||
// Use black_box when possible
|
||||
#[rustversion::since(1.66)]
|
||||
use core::hint::black_box;
|
||||
#[rustversion::before(1.66)]
|
||||
fn black_box<T>(val: T) -> T {
|
||||
val
|
||||
}
|
||||
|
||||
pub(crate) fn u8_from_bool(bit_ref: &mut bool) -> u8 {
|
||||
let bit_ref = black_box(bit_ref);
|
||||
|
||||
let mut bit = black_box(*bit_ref);
|
||||
let res = black_box(bit as u8);
|
||||
bit.zeroize();
|
||||
debug_assert!((res | 1) == 1);
|
||||
|
||||
bit_ref.zeroize();
|
||||
res
|
||||
}
|
||||
|
||||
macro_rules! math_op {
|
||||
(
|
||||
$Value: ident,
|
||||
$Other: ident,
|
||||
$Op: ident,
|
||||
$op_fn: ident,
|
||||
$Assign: ident,
|
||||
$assign_fn: ident,
|
||||
$function: expr
|
||||
) => {
|
||||
impl $Op<$Other> for $Value {
|
||||
type Output = $Value;
|
||||
fn $op_fn(self, other: $Other) -> Self::Output {
|
||||
Self($function(self.0, other.0))
|
||||
}
|
||||
}
|
||||
impl $Assign<$Other> for $Value {
|
||||
fn $assign_fn(&mut self, other: $Other) {
|
||||
self.0 = $function(self.0, other.0);
|
||||
}
|
||||
}
|
||||
impl<'a> $Op<&'a $Other> for $Value {
|
||||
type Output = $Value;
|
||||
fn $op_fn(self, other: &'a $Other) -> Self::Output {
|
||||
Self($function(self.0, other.0))
|
||||
}
|
||||
}
|
||||
impl<'a> $Assign<&'a $Other> for $Value {
|
||||
fn $assign_fn(&mut self, other: &'a $Other) {
|
||||
self.0 = $function(self.0, other.0);
|
||||
}
|
||||
}
|
||||
};
|
||||
}
|
||||
|
||||
macro_rules! from_wrapper {
|
||||
($wrapper: ident, $inner: ident, $uint: ident) => {
|
||||
impl From<$uint> for $wrapper {
|
||||
fn from(a: $uint) -> $wrapper {
|
||||
Self(Residue::new(&$inner::from(a)))
|
||||
}
|
||||
}
|
||||
};
|
||||
}
|
||||
|
||||
macro_rules! field {
|
||||
(
|
||||
$FieldName: ident,
|
||||
$ResidueType: ident,
|
||||
|
||||
$MODULUS_STR: ident,
|
||||
$MODULUS: ident,
|
||||
$WIDE_MODULUS: ident,
|
||||
|
||||
$NUM_BITS: literal,
|
||||
$MULTIPLICATIVE_GENERATOR: literal,
|
||||
$S: literal,
|
||||
$ROOT_OF_UNITY: literal,
|
||||
$DELTA: literal,
|
||||
) => {
|
||||
use core::{
|
||||
ops::{DerefMut, Add, AddAssign, Neg, Sub, SubAssign, Mul, MulAssign},
|
||||
iter::{Sum, Product},
|
||||
};
|
||||
|
||||
use subtle::{Choice, CtOption, ConstantTimeEq, ConstantTimeLess, ConditionallySelectable};
|
||||
use rand_core::RngCore;
|
||||
|
||||
use crypto_bigint::{Integer, NonZero, Encoding, impl_modulus};
|
||||
|
||||
use ciphersuite::group::ff::{
|
||||
Field, PrimeField, FieldBits, PrimeFieldBits, helpers::sqrt_ratio_generic,
|
||||
};
|
||||
|
||||
use $crate::backend::u8_from_bool;
|
||||
|
||||
fn reduce(x: U512) -> U256 {
|
||||
U256::from_le_slice(&x.rem(&NonZero::new($WIDE_MODULUS).unwrap()).to_le_bytes()[.. 32])
|
||||
}
|
||||
|
||||
impl ConstantTimeEq for $FieldName {
|
||||
fn ct_eq(&self, other: &Self) -> Choice {
|
||||
self.0.ct_eq(&other.0)
|
||||
}
|
||||
}
|
||||
|
||||
impl ConditionallySelectable for $FieldName {
|
||||
fn conditional_select(a: &Self, b: &Self, choice: Choice) -> Self {
|
||||
$FieldName(Residue::conditional_select(&a.0, &b.0, choice))
|
||||
}
|
||||
}
|
||||
|
||||
math_op!($FieldName, $FieldName, Add, add, AddAssign, add_assign, |x: $ResidueType, y| x
|
||||
.add(&y));
|
||||
math_op!($FieldName, $FieldName, Sub, sub, SubAssign, sub_assign, |x: $ResidueType, y| x
|
||||
.sub(&y));
|
||||
math_op!($FieldName, $FieldName, Mul, mul, MulAssign, mul_assign, |x: $ResidueType, y| x
|
||||
.mul(&y));
|
||||
|
||||
from_wrapper!($FieldName, U256, u8);
|
||||
from_wrapper!($FieldName, U256, u16);
|
||||
from_wrapper!($FieldName, U256, u32);
|
||||
from_wrapper!($FieldName, U256, u64);
|
||||
from_wrapper!($FieldName, U256, u128);
|
||||
|
||||
impl Neg for $FieldName {
|
||||
type Output = $FieldName;
|
||||
fn neg(self) -> $FieldName {
|
||||
Self(self.0.neg())
|
||||
}
|
||||
}
|
||||
|
||||
impl<'a> Neg for &'a $FieldName {
|
||||
type Output = $FieldName;
|
||||
fn neg(self) -> Self::Output {
|
||||
(*self).neg()
|
||||
}
|
||||
}
|
||||
|
||||
impl $FieldName {
|
||||
/// Perform an exponentiation.
|
||||
pub fn pow(&self, other: $FieldName) -> $FieldName {
|
||||
let mut table = [Self(Residue::ONE); 16];
|
||||
table[1] = *self;
|
||||
for i in 2 .. 16 {
|
||||
table[i] = table[i - 1] * self;
|
||||
}
|
||||
|
||||
let mut res = Self(Residue::ONE);
|
||||
let mut bits = 0;
|
||||
for (i, mut bit) in other.to_le_bits().iter_mut().rev().enumerate() {
|
||||
bits <<= 1;
|
||||
let mut bit = u8_from_bool(bit.deref_mut());
|
||||
bits |= bit;
|
||||
bit.zeroize();
|
||||
|
||||
if ((i + 1) % 4) == 0 {
|
||||
if i != 3 {
|
||||
for _ in 0 .. 4 {
|
||||
res *= res;
|
||||
}
|
||||
}
|
||||
|
||||
let mut factor = table[0];
|
||||
for (j, candidate) in table[1 ..].iter().enumerate() {
|
||||
let j = j + 1;
|
||||
factor = Self::conditional_select(&factor, &candidate, usize::from(bits).ct_eq(&j));
|
||||
}
|
||||
res *= factor;
|
||||
bits = 0;
|
||||
}
|
||||
}
|
||||
res
|
||||
}
|
||||
}
|
||||
|
||||
impl Field for $FieldName {
|
||||
const ZERO: Self = Self(Residue::ZERO);
|
||||
const ONE: Self = Self(Residue::ONE);
|
||||
|
||||
fn random(mut rng: impl RngCore) -> Self {
|
||||
let mut bytes = [0; 64];
|
||||
rng.fill_bytes(&mut bytes);
|
||||
$FieldName(Residue::new(&reduce(U512::from_be_slice(bytes.as_ref()))))
|
||||
}
|
||||
|
||||
fn square(&self) -> Self {
|
||||
Self(self.0.square())
|
||||
}
|
||||
fn double(&self) -> Self {
|
||||
*self + self
|
||||
}
|
||||
|
||||
fn invert(&self) -> CtOption<Self> {
|
||||
let res = self.0.invert();
|
||||
CtOption::new(Self(res.0), res.1.into())
|
||||
}
|
||||
|
||||
fn sqrt(&self) -> CtOption<Self> {
|
||||
// (p + 1) // 4, as valid since p % 4 == 3
|
||||
let mod_plus_one_div_four = $MODULUS.saturating_add(&U256::ONE).wrapping_div(&(4u8.into()));
|
||||
let res = self.pow(Self($ResidueType::new_checked(&mod_plus_one_div_four).unwrap()));
|
||||
CtOption::new(res, res.square().ct_eq(self))
|
||||
}
|
||||
|
||||
fn sqrt_ratio(num: &Self, div: &Self) -> (Choice, Self) {
|
||||
sqrt_ratio_generic(num, div)
|
||||
}
|
||||
}
|
||||
|
||||
impl PrimeField for $FieldName {
|
||||
type Repr = [u8; 32];
|
||||
|
||||
const MODULUS: &'static str = $MODULUS_STR;
|
||||
|
||||
const NUM_BITS: u32 = $NUM_BITS;
|
||||
const CAPACITY: u32 = $NUM_BITS - 1;
|
||||
|
||||
const TWO_INV: Self = $FieldName($ResidueType::new(&U256::from_u8(2)).invert().0);
|
||||
|
||||
const MULTIPLICATIVE_GENERATOR: Self =
|
||||
Self(Residue::new(&U256::from_u8($MULTIPLICATIVE_GENERATOR)));
|
||||
const S: u32 = $S;
|
||||
|
||||
const ROOT_OF_UNITY: Self = $FieldName(Residue::new(&U256::from_be_hex($ROOT_OF_UNITY)));
|
||||
const ROOT_OF_UNITY_INV: Self = Self(Self::ROOT_OF_UNITY.0.invert().0);
|
||||
|
||||
const DELTA: Self = $FieldName(Residue::new(&U256::from_be_hex($DELTA)));
|
||||
|
||||
fn from_repr(bytes: Self::Repr) -> CtOption<Self> {
|
||||
let res = U256::from_be_slice(&bytes);
|
||||
CtOption::new($FieldName(Residue::new(&res)), res.ct_lt(&$MODULUS))
|
||||
}
|
||||
fn to_repr(&self) -> Self::Repr {
|
||||
let mut repr = [0; 32];
|
||||
repr.copy_from_slice(&self.0.retrieve().to_be_bytes());
|
||||
repr
|
||||
}
|
||||
|
||||
fn is_odd(&self) -> Choice {
|
||||
self.0.retrieve().is_odd()
|
||||
}
|
||||
}
|
||||
|
||||
impl PrimeFieldBits for $FieldName {
|
||||
type ReprBits = [u8; 32];
|
||||
|
||||
fn to_le_bits(&self) -> FieldBits<Self::ReprBits> {
|
||||
let mut repr = [0; 32];
|
||||
repr.copy_from_slice(&self.0.retrieve().to_le_bytes());
|
||||
repr.into()
|
||||
}
|
||||
|
||||
fn char_le_bits() -> FieldBits<Self::ReprBits> {
|
||||
let mut repr = [0; 32];
|
||||
repr.copy_from_slice(&MODULUS.to_le_bytes());
|
||||
repr.into()
|
||||
}
|
||||
}
|
||||
|
||||
impl Sum<$FieldName> for $FieldName {
|
||||
fn sum<I: Iterator<Item = $FieldName>>(iter: I) -> $FieldName {
|
||||
let mut res = $FieldName::ZERO;
|
||||
for item in iter {
|
||||
res += item;
|
||||
}
|
||||
res
|
||||
}
|
||||
}
|
||||
|
||||
impl<'a> Sum<&'a $FieldName> for $FieldName {
|
||||
fn sum<I: Iterator<Item = &'a $FieldName>>(iter: I) -> $FieldName {
|
||||
iter.cloned().sum()
|
||||
}
|
||||
}
|
||||
|
||||
impl Product<$FieldName> for $FieldName {
|
||||
fn product<I: Iterator<Item = $FieldName>>(iter: I) -> $FieldName {
|
||||
let mut res = $FieldName::ONE;
|
||||
for item in iter {
|
||||
res *= item;
|
||||
}
|
||||
res
|
||||
}
|
||||
}
|
||||
|
||||
impl<'a> Product<&'a $FieldName> for $FieldName {
|
||||
fn product<I: Iterator<Item = &'a $FieldName>>(iter: I) -> $FieldName {
|
||||
iter.cloned().product()
|
||||
}
|
||||
}
|
||||
};
|
||||
}
|
||||
crypto/evrf/secq256k1/src/lib.rs (new file, 47 lines)
@@ -0,0 +1,47 @@
|
||||
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
|
||||
#![doc = include_str!("../README.md")]
|
||||
|
||||
use generic_array::typenum::{Sum, Diff, Quot, U, U1, U2};
|
||||
use ciphersuite::group::{ff::PrimeField, Group};
|
||||
|
||||
#[macro_use]
|
||||
mod backend;
|
||||
|
||||
mod scalar;
|
||||
pub use scalar::Scalar;
|
||||
|
||||
pub use k256::Scalar as FieldElement;
|
||||
|
||||
mod point;
|
||||
pub use point::Point;
|
||||
|
||||
/// Ciphersuite for Secq256k1.
|
||||
///
|
||||
/// hash_to_F is implemented with a naive concatenation of the dst and data, allowing transposition
|
||||
/// between the two. This means `dst: b"abc", data: b"def"` will produce the same scalar as
|
||||
/// `dst: "abcdef", data: b""`. Please use carefully, not letting dsts be substrings of each other.
|
||||
#[derive(Clone, Copy, PartialEq, Eq, Debug, zeroize::Zeroize)]
|
||||
pub struct Secq256k1;
|
||||
impl ciphersuite::Ciphersuite for Secq256k1 {
|
||||
type F = Scalar;
|
||||
type G = Point;
|
||||
type H = blake2::Blake2b512;
|
||||
|
||||
const ID: &'static [u8] = b"secq256k1";
|
||||
|
||||
fn generator() -> Self::G {
|
||||
Point::generator()
|
||||
}
|
||||
|
||||
fn hash_to_F(dst: &[u8], data: &[u8]) -> Self::F {
|
||||
use blake2::Digest;
|
||||
Scalar::wide_reduce(Self::H::digest([dst, data].concat()).as_slice().try_into().unwrap())
|
||||
}
|
||||
}
|
||||
|
||||
impl generalized_bulletproofs_ec_gadgets::DiscreteLogParameters for Secq256k1 {
|
||||
type ScalarBits = U<{ Scalar::NUM_BITS as usize }>;
|
||||
type XCoefficients = Quot<Sum<Self::ScalarBits, U1>, U2>;
|
||||
type XCoefficientsMinusOne = Diff<Self::XCoefficients, U1>;
|
||||
type YxCoefficients = Diff<Quot<Sum<Sum<Self::ScalarBits, U1>, U1>, U2>, U2>;
|
||||
}
|
||||
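A concrete illustration of the `hash_to_F` caveat documented above (a sketch; it only assumes the `Ciphersuite` trait is in scope):

use ciphersuite::Ciphersuite;

// dst and data are concatenated with no length separator, so these two calls
// hash the same byte string and therefore return the same scalar.
let a = Secq256k1::hash_to_F(b"abc", b"def");
let b = Secq256k1::hash_to_F(b"abcdef", b"");
assert_eq!(a, b);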
crypto/evrf/secq256k1/src/point.rs (new file, 414 lines)
@@ -0,0 +1,414 @@
|
||||
use core::{
|
||||
ops::{DerefMut, Add, AddAssign, Neg, Sub, SubAssign, Mul, MulAssign},
|
||||
iter::Sum,
|
||||
};
|
||||
|
||||
use rand_core::RngCore;
|
||||
|
||||
use zeroize::Zeroize;
|
||||
use subtle::{Choice, CtOption, ConstantTimeEq, ConditionallySelectable, ConditionallyNegatable};
|
||||
|
||||
use generic_array::{typenum::U33, GenericArray};
|
||||
|
||||
use ciphersuite::group::{
|
||||
ff::{Field, PrimeField, PrimeFieldBits},
|
||||
Group, GroupEncoding,
|
||||
prime::PrimeGroup,
|
||||
};
|
||||
|
||||
use crate::{backend::u8_from_bool, Scalar, FieldElement};
|
||||
|
||||
fn recover_y(x: FieldElement) -> CtOption<FieldElement> {
|
||||
// x**3 + B since a = 0
|
||||
((x.square() * x) + FieldElement::from(7u64)).sqrt()
|
||||
}
|
||||
|
||||
/// Point.
|
||||
#[derive(Clone, Copy, Debug, Zeroize)]
|
||||
#[repr(C)]
|
||||
pub struct Point {
|
||||
x: FieldElement, // / Z
|
||||
y: FieldElement, // / Z
|
||||
z: FieldElement,
|
||||
}
|
||||
|
||||
impl ConstantTimeEq for Point {
|
||||
fn ct_eq(&self, other: &Self) -> Choice {
|
||||
let x1 = self.x * other.z;
|
||||
let x2 = other.x * self.z;
|
||||
|
||||
let y1 = self.y * other.z;
|
||||
let y2 = other.y * self.z;
|
||||
|
||||
(self.x.is_zero() & other.x.is_zero()) | (x1.ct_eq(&x2) & y1.ct_eq(&y2))
|
||||
}
|
||||
}
|
||||
|
||||
impl PartialEq for Point {
|
||||
fn eq(&self, other: &Point) -> bool {
|
||||
self.ct_eq(other).into()
|
||||
}
|
||||
}
|
||||
|
||||
impl Eq for Point {}
|
||||
|
||||
impl ConditionallySelectable for Point {
|
||||
fn conditional_select(a: &Self, b: &Self, choice: Choice) -> Self {
|
||||
Point {
|
||||
x: FieldElement::conditional_select(&a.x, &b.x, choice),
|
||||
y: FieldElement::conditional_select(&a.y, &b.y, choice),
|
||||
z: FieldElement::conditional_select(&a.z, &b.z, choice),
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
impl Add for Point {
|
||||
type Output = Point;
|
||||
#[allow(non_snake_case)]
|
||||
fn add(self, other: Self) -> Self {
|
||||
// add-2015-rcb
|
||||
|
||||
let a = FieldElement::ZERO;
|
||||
let B = FieldElement::from(7u64);
|
||||
let b3 = B + B + B;
|
||||
|
||||
let X1 = self.x;
|
||||
let Y1 = self.y;
|
||||
let Z1 = self.z;
|
||||
let X2 = other.x;
|
||||
let Y2 = other.y;
|
||||
let Z2 = other.z;
|
||||
|
||||
let t0 = X1 * X2;
|
||||
let t1 = Y1 * Y2;
|
||||
let t2 = Z1 * Z2;
|
||||
let t3 = X1 + Y1;
|
||||
let t4 = X2 + Y2;
|
||||
let t3 = t3 * t4;
|
||||
let t4 = t0 + t1;
|
||||
let t3 = t3 - t4;
|
||||
let t4 = X1 + Z1;
|
||||
let t5 = X2 + Z2;
|
||||
let t4 = t4 * t5;
|
||||
let t5 = t0 + t2;
|
||||
let t4 = t4 - t5;
|
||||
let t5 = Y1 + Z1;
|
||||
let X3 = Y2 + Z2;
|
||||
let t5 = t5 * X3;
|
||||
let X3 = t1 + t2;
|
||||
let t5 = t5 - X3;
|
||||
let Z3 = a * t4;
|
||||
let X3 = b3 * t2;
|
||||
let Z3 = X3 + Z3;
|
||||
let X3 = t1 - Z3;
|
||||
let Z3 = t1 + Z3;
|
||||
let Y3 = X3 * Z3;
|
||||
let t1 = t0 + t0;
|
||||
let t1 = t1 + t0;
|
||||
let t2 = a * t2;
|
||||
let t4 = b3 * t4;
|
||||
let t1 = t1 + t2;
|
||||
let t2 = t0 - t2;
|
||||
let t2 = a * t2;
|
||||
let t4 = t4 + t2;
|
||||
let t0 = t1 * t4;
|
||||
let Y3 = Y3 + t0;
|
||||
let t0 = t5 * t4;
|
||||
let X3 = t3 * X3;
|
||||
let X3 = X3 - t0;
|
||||
let t0 = t3 * t1;
|
||||
let Z3 = t5 * Z3;
|
||||
let Z3 = Z3 + t0;
|
||||
Point { x: X3, y: Y3, z: Z3 }
|
||||
}
|
||||
}
|
||||
|
||||
impl AddAssign for Point {
|
||||
fn add_assign(&mut self, other: Point) {
|
||||
*self = *self + other;
|
||||
}
|
||||
}
|
||||
|
||||
impl Add<&Point> for Point {
|
||||
type Output = Point;
|
||||
fn add(self, other: &Point) -> Point {
|
||||
self + *other
|
||||
}
|
||||
}
|
||||
|
||||
impl AddAssign<&Point> for Point {
|
||||
fn add_assign(&mut self, other: &Point) {
|
||||
*self += *other;
|
||||
}
|
||||
}
|
||||
|
||||
impl Neg for Point {
|
||||
type Output = Point;
|
||||
fn neg(self) -> Self {
|
||||
Point { x: self.x, y: -self.y, z: self.z }
|
||||
}
|
||||
}
|
||||
|
||||
impl Sub for Point {
|
||||
type Output = Point;
|
||||
#[allow(clippy::suspicious_arithmetic_impl)]
|
||||
fn sub(self, other: Self) -> Self {
|
||||
self + other.neg()
|
||||
}
|
||||
}
|
||||
|
||||
impl SubAssign for Point {
|
||||
fn sub_assign(&mut self, other: Point) {
|
||||
*self = *self - other;
|
||||
}
|
||||
}
|
||||
|
||||
impl Sub<&Point> for Point {
|
||||
type Output = Point;
|
||||
fn sub(self, other: &Point) -> Point {
|
||||
self - *other
|
||||
}
|
||||
}
|
||||
|
||||
impl SubAssign<&Point> for Point {
|
||||
fn sub_assign(&mut self, other: &Point) {
|
||||
*self -= *other;
|
||||
}
|
||||
}
|
||||
|
||||
impl Group for Point {
|
||||
type Scalar = Scalar;
|
||||
fn random(mut rng: impl RngCore) -> Self {
|
||||
loop {
|
||||
let mut bytes = GenericArray::default();
|
||||
rng.fill_bytes(bytes.as_mut());
|
||||
let opt = Self::from_bytes(&bytes);
|
||||
if opt.is_some().into() {
|
||||
return opt.unwrap();
|
||||
}
|
||||
}
|
||||
}
|
||||
fn identity() -> Self {
|
||||
Point { x: FieldElement::ZERO, y: FieldElement::ONE, z: FieldElement::ZERO }
|
||||
}
|
||||
fn generator() -> Self {
|
||||
Point {
|
||||
x: FieldElement::from_repr(
|
||||
hex_literal::hex!("0000000000000000000000000000000000000000000000000000000000000001")
|
||||
.into(),
|
||||
)
|
||||
.unwrap(),
|
||||
y: FieldElement::from_repr(
|
||||
hex_literal::hex!("0C7C97045A2074634909ABDF82C9BD0248916189041F2AF0C1B800D1FFC278C0")
|
||||
.into(),
|
||||
)
|
||||
.unwrap(),
|
||||
z: FieldElement::ONE,
|
||||
}
|
||||
}
|
||||
fn is_identity(&self) -> Choice {
|
||||
self.z.ct_eq(&FieldElement::ZERO)
|
||||
}
|
||||
#[allow(non_snake_case)]
|
||||
fn double(&self) -> Self {
|
||||
// dbl-2007-bl
|
||||
|
||||
let a = FieldElement::ZERO;
|
||||
|
||||
let X1 = self.x;
|
||||
let Y1 = self.y;
|
||||
let Z1 = self.z;
|
||||
|
||||
let XX = X1 * X1;
|
||||
let ZZ = Z1 * Z1;
|
||||
let w = (a * ZZ) + XX.double() + XX;
|
||||
let s = (Y1 * Z1).double();
|
||||
let ss = s * s;
|
||||
let sss = s * ss;
|
||||
let R = Y1 * s;
|
||||
let RR = R * R;
|
||||
let B = X1 + R;
|
||||
let B = (B * B) - XX - RR;
|
||||
let h = (w * w) - B.double();
|
||||
let X3 = h * s;
|
||||
let Y3 = w * (B - h) - RR.double();
|
||||
let Z3 = sss;
|
||||
|
||||
let res = Self { x: X3, y: Y3, z: Z3 };
|
||||
// If self is identity, res will not be well-formed
|
||||
// Accordingly, we return self if self was the identity
|
||||
Self::conditional_select(&res, self, self.is_identity())
|
||||
}
|
||||
}
|
||||
|
||||
impl Sum<Point> for Point {
|
||||
fn sum<I: Iterator<Item = Point>>(iter: I) -> Point {
|
||||
let mut res = Self::identity();
|
||||
for i in iter {
|
||||
res += i;
|
||||
}
|
||||
res
|
||||
}
|
||||
}
|
||||
|
||||
impl<'a> Sum<&'a Point> for Point {
|
||||
fn sum<I: Iterator<Item = &'a Point>>(iter: I) -> Point {
|
||||
Point::sum(iter.cloned())
|
||||
}
|
||||
}
|
||||
|
||||
impl Mul<Scalar> for Point {
|
||||
type Output = Point;
|
||||
fn mul(self, mut other: Scalar) -> Point {
|
||||
// Precompute a 16-entry table of multiples, used for 4-bit windowed multiplication
|
||||
let mut table = [Point::identity(); 16];
|
||||
table[1] = self;
|
||||
for i in 2 .. 16 {
|
||||
table[i] = table[i - 1] + self;
|
||||
}
|
||||
|
||||
let mut res = Self::identity();
|
||||
let mut bits = 0;
|
||||
for (i, mut bit) in other.to_le_bits().iter_mut().rev().enumerate() {
|
||||
bits <<= 1;
|
||||
let mut bit = u8_from_bool(bit.deref_mut());
|
||||
bits |= bit;
|
||||
bit.zeroize();
|
||||
|
||||
if ((i + 1) % 4) == 0 {
|
||||
if i != 3 {
|
||||
for _ in 0 .. 4 {
|
||||
res = res.double();
|
||||
}
|
||||
}
|
||||
|
||||
let mut term = table[0];
|
||||
for (j, candidate) in table[1 ..].iter().enumerate() {
|
||||
let j = j + 1;
|
||||
term = Self::conditional_select(&term, candidate, usize::from(bits).ct_eq(&j));
|
||||
}
|
||||
res += term;
|
||||
bits = 0;
|
||||
}
|
||||
}
|
||||
other.zeroize();
|
||||
res
|
||||
}
|
||||
}
|
||||
|
||||
impl MulAssign<Scalar> for Point {
|
||||
fn mul_assign(&mut self, other: Scalar) {
|
||||
*self = *self * other;
|
||||
}
|
||||
}
|
||||
|
||||
impl Mul<&Scalar> for Point {
|
||||
type Output = Point;
|
||||
fn mul(self, other: &Scalar) -> Point {
|
||||
self * *other
|
||||
}
|
||||
}
|
||||
|
||||
impl MulAssign<&Scalar> for Point {
|
||||
fn mul_assign(&mut self, other: &Scalar) {
|
||||
*self *= *other;
|
||||
}
|
||||
}

impl GroupEncoding for Point {
  type Repr = GenericArray<u8, U33>;

  fn from_bytes(bytes: &Self::Repr) -> CtOption<Self> {
    // Extract the sign bit; the rest of the first byte must be clear (checked below)
    let sign = Choice::from(bytes[0] & 1);

    // Parse x, recover y
    FieldElement::from_repr(*GenericArray::from_slice(&bytes[1 ..])).and_then(|x| {
      let is_identity = x.is_zero();

      let y = recover_y(x).map(|mut y| {
        y.conditional_negate(y.is_odd().ct_eq(&!sign));
        y
      });

      // If this is the identity, set y to 1
      let y =
        CtOption::conditional_select(&y, &CtOption::new(FieldElement::ONE, 1.into()), is_identity);
      // Create the point if we have a y solution
      let point = y.map(|y| Point { x, y, z: FieldElement::ONE });

      let not_negative_zero = !(is_identity & sign);
      // Only return the point if it isn't -0 and the sign byte wasn't malleated
      CtOption::conditional_select(
        &CtOption::new(Point::identity(), 0.into()),
        &point,
        not_negative_zero & ((bytes[0] & 1).ct_eq(&bytes[0])),
      )
    })
  }

  fn from_bytes_unchecked(bytes: &Self::Repr) -> CtOption<Self> {
    Point::from_bytes(bytes)
  }

  fn to_bytes(&self) -> Self::Repr {
    let Some(z) = Option::<FieldElement>::from(self.z.invert()) else {
      return *GenericArray::from_slice(&[0; 33]);
    };
    let x = self.x * z;
    let y = self.y * z;

    let mut res = *GenericArray::from_slice(&[0; 33]);
    res[1 ..].as_mut().copy_from_slice(&x.to_repr());

    // The following conditional select normalizes the sign to 0 when x is 0
    let y_sign = u8::conditional_select(&y.is_odd().unwrap_u8(), &0, x.ct_eq(&FieldElement::ZERO));
    res[0] |= y_sign;
    res
  }
}
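
// Illustrative sketch, not in the original file: the 33-byte encoding (a sign byte followed by
// the x coordinate) round-trips, and the identity serializes to all zeroes per to_bytes above.
#[test]
fn encoding_round_trips() {
  let p = Point::random(&mut rand_core::OsRng);
  assert_eq!(Point::from_bytes(&p.to_bytes()).unwrap(), p);
  assert_eq!(Point::identity().to_bytes(), *GenericArray::from_slice(&[0; 33]));
}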

impl PrimeGroup for Point {}

impl ec_divisors::DivisorCurve for Point {
  type FieldElement = FieldElement;

  fn a() -> Self::FieldElement {
    FieldElement::from(0u64)
  }
  fn b() -> Self::FieldElement {
    FieldElement::from(7u64)
  }

  fn to_xy(point: Self) -> Option<(Self::FieldElement, Self::FieldElement)> {
    let z: Self::FieldElement = Option::from(point.z.invert())?;
    Some((point.x * z, point.y * z))
  }
}
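
// Illustrative sketch, not in the original file: the affine coordinates returned by to_xy satisfy
// the short Weierstrass equation y^2 = x^3 + a*x + b with the a and b declared above (0 and 7).
#[test]
fn to_xy_satisfies_curve_equation() {
  use ec_divisors::DivisorCurve;
  let (x, y) = Point::to_xy(Point::generator()).unwrap();
  assert_eq!(y * y, ((x * x) * x) + (Point::a() * x) + Point::b());
}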

#[test]
fn test_curve() {
  ff_group_tests::group::test_prime_group_bits::<_, Point>(&mut rand_core::OsRng);
}

#[test]
fn generator() {
  assert_eq!(
    Point::generator(),
    Point::from_bytes(GenericArray::from_slice(&hex_literal::hex!(
      "000000000000000000000000000000000000000000000000000000000000000001"
    )))
    .unwrap()
  );
}

#[test]
fn zero_x_is_invalid() {
  assert!(Option::<FieldElement>::from(recover_y(FieldElement::ZERO)).is_none());
}

// Checks random won't infinitely loop
#[test]
fn random() {
  Point::random(&mut rand_core::OsRng);
}

crypto/evrf/secq256k1/src/scalar.rs (new file, 52 lines)
@@ -0,0 +1,52 @@
use zeroize::{DefaultIsZeroes, Zeroize};

use crypto_bigint::{
  U256, U512,
  modular::constant_mod::{ResidueParams, Residue},
};

const MODULUS_STR: &str = "FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFFC2F";

impl_modulus!(SecQ, U256, MODULUS_STR);
type ResidueType = Residue<SecQ, { SecQ::LIMBS }>;

/// The Scalar field of secq256k1.
///
/// This is equivalent to the field secp256k1 is defined over.
#[derive(Clone, Copy, PartialEq, Eq, Default, Debug)]
#[repr(C)]
pub struct Scalar(pub(crate) ResidueType);

impl DefaultIsZeroes for Scalar {}

pub(crate) const MODULUS: U256 = U256::from_be_hex(MODULUS_STR);

const WIDE_MODULUS: U512 = U512::from_be_hex(concat!(
  "0000000000000000000000000000000000000000000000000000000000000000",
  "FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFFC2F",
));
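
// Illustrative sketch, not in the original file: the modulus above is the secp256k1 base-field
// prime 2^256 - 2^32 - 977, so adding 977 and 2^32 back wraps U256 around to zero.
#[test]
fn modulus_is_the_secp256k1_field_prime() {
  assert_eq!(
    MODULUS.wrapping_add(&U256::from(977u64)).wrapping_add(&U256::from(1u64 << 32)),
    U256::ZERO,
  );
}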

field!(
  Scalar,
  ResidueType,
  MODULUS_STR,
  MODULUS,
  WIDE_MODULUS,
  256,
  3,
  1,
  "fffffffffffffffffffffffffffffffffffffffffffffffffffffffefffffc2e",
  "0000000000000000000000000000000000000000000000000000000000000009",
);

impl Scalar {
  /// Perform a wide reduction, typically to obtain an unbiased Scalar field element.
  pub fn wide_reduce(bytes: [u8; 64]) -> Scalar {
    Scalar(Residue::new(&reduce(U512::from_le_slice(bytes.as_ref()))))
  }
}
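
// Illustrative sketch, not in the original file: wide reduction of 64 bytes (e.g. a hash output)
// yields a canonical Scalar, checked here via a to_repr/from_repr round-trip. Assumes the ff
// crate, required for the PrimeField implementation, is an accessible dependency.
#[test]
fn wide_reduce_yields_canonical_scalar() {
  use ff::PrimeField;
  let scalar = Scalar::wide_reduce([0xFF; 64]);
  assert_eq!(Scalar::from_repr(scalar.to_repr()).unwrap(), scalar);
}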

#[test]
fn test_scalar_field() {
  ff_group_tests::prime_field::test_prime_field_bits::<_, Scalar>(&mut rand_core::OsRng);
}