220 Commits

Author SHA1 Message Date
Luke Parker
1de8136739 Remove Session from VariantSignId::SlashReport
It's only there to make the VariantSignId unique across Sessions. By localizing
the VariantSignId to a Session, we avoid this, and can better ensure we don't
queue work for historic sessions.
2024-12-30 06:16:03 -05:00
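
A minimal sketch of the shape of this change; the variant names and surrounding types are illustrative, not the repository's actual definitions:

```rust
// Illustrative sketch only; not the actual definitions.
struct Session(u32);

// Before: the Session was embedded solely to keep the ID unique across sessions.
enum VariantSignIdBefore {
    SlashReport(Session),
    // ... other variants elided
}

// After: the ID is localized to a Session externally, so the variant carries no
// payload, and work queued under a historic Session can't be confused with new work.
enum VariantSignIdAfter {
    SlashReport,
    // ... other variants elided
}
```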
Luke Parker
445c49f030 Have the scanner's report task ensure handovers only occur if Batches are valid
This is incomplete at this time. The logic is fine, but needs to be moved to a
distinct location to handle singular blocks which produce multiple Batches.
2024-12-30 06:11:47 -05:00
Luke Parker
5b74fc8ac1 Merge ExternalKeyForSessionToSignBatch into InfoForBatch 2024-12-30 05:34:13 -05:00
Luke Parker
e67e301fc2 Have the processor verify the published Batches match expectations 2024-12-30 05:21:26 -05:00
Luke Parker
1d50792eed Document serai-db with bounds and intent 2024-12-26 02:35:32 -05:00
Luke Parker
9c92709e62 Delay cosign acknowledgments 2024-12-26 01:04:20 -05:00
Luke Parker
3d15710a43 Only check the cosign is after its start block if faulty
We don't have consensus on the session's last block, so we shouldn't check if
the cosign is before the session ends. What matters is that the network, within its
set, claims it's still active at that block (on its view of the blockchain).
2024-12-26 00:26:48 -05:00
Luke Parker
df06da5552 Only check if the cosign is stale if it isn't faulty
If it is faulty, we want to archive it regardless.
2024-12-26 00:24:48 -05:00
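
Taken with the entry above, the intake ordering sketches as follows; all names are illustrative stubs, not the evaluator's actual API:

```rust
// Illustrative stubs; the real types live in the cosign evaluator.
struct Cosign;
fn is_faulty(_: &Cosign) -> bool { false }
fn is_stale(_: &Cosign) -> bool { false }
fn archive_fault(_: Cosign) {}
fn evaluate(_: Cosign) {}

// Faults are archived regardless of staleness; staleness only drops cosigns
// which aren't faulty.
fn intake_cosign(cosign: Cosign) {
    if is_faulty(&cosign) {
        archive_fault(cosign);
        return;
    }
    if is_stale(&cosign) {
        return;
    }
    evaluate(cosign);
}
```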
Luke Parker
cef5bc95b0 Revert prior commit
An archive of all GlobalSessions is necessary to check for faults. The storage
cost is also minimal. While it should be avoided if it can be, it can't be
here.
2024-12-26 00:15:49 -05:00
Luke Parker
f336ab1ece Remove GlobalSessions DB entry
If we read the currently-being-evaluated session from the evaluator, we can
avoid paying the storage costs on all sessions ad infinitum.
2024-12-25 23:57:51 -05:00
Luke Parker
2aebfb21af Remove serai from the cosign evaluator 2024-12-25 23:47:21 -05:00
Luke Parker
56af6c44eb Remove usage of serai from intake_cosign 2024-12-25 21:19:04 -05:00
Luke Parker
4b34be05bf rocksdb 0.23 2024-12-25 19:48:48 -05:00
Luke Parker
5b337c3ce8 Prevent a malicious validator set from overwriting a notable cosign
Also prevents panics from an invalid Serai node (removing the assumption of an
honest Serai node).
2024-12-25 02:11:05 -05:00
Luke Parker
e119fb4c16 Replace Cosigns by extending NetworksLatestCosignedBlock
Cosigns was an archive of every single cosign ever received. By scoping
NetworksLatestCosignedBlock to be by the global session, we have the latest
cosign for each network in a session (valid to replace all prior cosigns by
that network within that session, even for the purposes of fault) and
automatically have the notable cosigns indexed (as they are the latest ones
within their session). This not only saves space but also allows optimizing
evaluation a bit.
2024-12-25 01:45:37 -05:00
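
A hedged sketch of the described keying; the name and encoding are illustrative (the real code uses serai-db's macros):

```rust
// Illustrative only: keying the latest cosigned block by
// (global session, network) yields one entry per network per session. The
// latest entry within a session is, by construction, that session's notable
// cosign, so no separate archive of every cosign is needed.
fn latest_cosigned_block_key(global_session: [u8; 32], network_id: u16) -> Vec<u8> {
    let mut key = b"NetworksLatestCosignedBlock".to_vec();
    key.extend_from_slice(&global_session);
    key.extend_from_slice(&network_id.to_le_bytes());
    key
}
```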
Luke Parker
ef972b2658 Add cosign signature verification 2024-12-25 00:06:46 -05:00
Luke Parker
4de1a5804d Dedicated library for intending and evaluating cosigns
Not only cleans the existing cosign code but enables non-Serai-coordinators to
evaluate cosigns if they gain access to a feed of them (such as over an RPC).
This would let centralized services track not only the finalized chain but also
the cosigned chain, without directly running a coordinator.

Still being wrapped up.
2024-12-22 06:41:55 -05:00
Luke Parker
147a6e43d0 Split task from serai-processor-primitives into serai-task 2024-12-19 10:08:13 -05:00
Luke Parker
066aa9eda4 cargo update
Resolves RUSTSEC-2024-0421
2024-12-12 00:45:19 -05:00
Luke Parker
9593a428e3 alloy 0.8 2024-12-11 01:02:58 -05:00
Luke Parker
5b3c5ec02b Basic Ethereum escapeHatch test 2024-12-09 02:00:17 -05:00
Luke Parker
9ccfa8a9f5 Fix deny 2024-12-08 22:01:43 -05:00
Luke Parker
18897978d0 thiserror 2.0, cargo update 2024-12-08 21:55:37 -05:00
Luke Parker
3192370484 Add Serai key confirmation to prevent rotating to an unusable key
Also updates alloy to the latest version
2024-12-08 20:42:37 -05:00
Luke Parker
8013c56195 Add/correct msrv labels 2024-12-08 18:27:15 -05:00
Luke Parker
834c16930b Add a bitmask of OutInstruction events to Executed
Allows explorers to provide clarity on what occurred.
2024-11-02 21:00:01 -04:00
Luke Parker
2920987173 Add a re-entrancy guard to Router.execute 2024-11-02 20:12:48 -04:00
Luke Parker
26230377b0 Define IRouterWithoutCollisions which Router inherits from
This ensures Router implements most of IRouterWithoutCollisions. It solely
leaves us to confirm Router implements the extensions defined in IRouter.
2024-11-02 19:10:39 -04:00
Luke Parker
2f5c0c68d0 Add selector collisions to Router to make it IRouter compatible 2024-11-02 18:13:02 -04:00
Luke Parker
8de42cc2d4 Add IRouter 2024-11-02 13:19:07 -04:00
Luke Parker
cf4123b0f8 Update how signatures are handled by the Router 2024-11-02 10:47:09 -04:00
Luke Parker
6a520a7412 Work on testing the Router 2024-10-31 02:23:59 -04:00
Luke Parker
b2ec58a445 Update serai-ethereum-processor to compile 2024-10-30 21:48:40 -04:00
Luke Parker
8e800885fb Simplify deterministic signing process in serai-processor-ethereum-primitives
This should be easier to specify/do an alternative implementation of.
2024-10-30 21:36:31 -04:00
Luke Parker
2a427382f1 Natspec, slither Deployer, Router 2024-10-30 21:35:43 -04:00
Luke Parker
ce1689b325 Expand tests for ethereum-schnorr-contract 2024-10-28 18:08:31 -04:00
Luke Parker
0b61a75afc Add lint against string slicing
These are tricky, as slicing panics if the slice doesn't land on a UTF-8
codepoint boundary.
2024-10-02 21:58:48 -04:00
Luke Parker
2aee21e507 Fix decomposition -> divisor points vartime due to branch prediction/cache rules 2024-09-29 04:19:16 -04:00
Luke Parker
b3e003bd5d cargo +nightly fmt 2024-09-25 10:22:49 -04:00
Luke Parker
251a6e96e8 Constant-time divisors (#617)
* WIP constant-time implementation of the ec-divisors library

* Fix misc logic errors in poly.rs

* Remove accidentally committed test statements

* Fix ConstantTimeEq for CoefficientIndex

* Correct the iterations formula

x**3 / (0 y + x**1) was previously considered indivisible with iterations = 0.
It is divisible, however. The number of iterations should be the number of
coefficients within the numerator *excluding the coefficient for y**0 x**0*.

* Poly PartialEq, conditional_select_poly which checks poly structure equivalence

If the first passed argument is smaller than the second, it's padded to the
necessary length.

Also adds code to trim the remainder as the remainder is the value modulo, so
it's very important it remains concise and workable.

* Fix the line function

It selected the case if both were identity before selecting the case if either
were identity, the latter overwriting the former.

* Final fixes re: ct_get

1) Our quotient structure does need to be of size equal to the numerator
   entirely to prevent out-of-bounds reads on it
2) We need to get from yx_coefficients if of length >=, so if the length is 1
   we can read y_pow=1 from it. If y_pow=0, and its length is 0 so it has no
   inner Vecs, we need to fall back with the guard y_pow != 0.

* Add a trim algorithm to lib.rs to prevent Polys from becoming unbearably gigantic

Our Poly algorithm is incredibly leaky. While it presumably should be improved,
we can take advantage of our known structure while constructing divisors (and
the small modulus) to simply trim out the zero coefficients leaked. This
maintains Polys in a manageable size.

* Move constant-time scalar mul gadget divisor creation from dkg to ec-divisors

Anyone creating a divisor for the scalar mul gadget should use constant-time
code, so this code should at least be in the EC gadgets crate. It's of
non-trivial complexity to deal with otherwise.

* Remove unsafe, cache timing attacks from ec-divisors
2024-09-24 17:27:05 -04:00
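
As a sketch of the constant-time selection described in the `conditional_select_poly` bullet: pad both inputs to a common shape, then select coefficient-wise without branching on secret data. This is illustrative, not the crate's actual code:

```rust
use subtle::{Choice, ConditionallySelectable};

// Select between two coefficient vectors in constant time. Callers must pad
// the shorter vector first so both share the same structure/length.
fn conditional_select_coefficients(a: &[u64], b: &[u64], choice: Choice) -> Vec<u64> {
    assert_eq!(a.len(), b.len(), "pad to a common length before selecting");
    a.iter().zip(b.iter()).map(|(a, b)| u64::conditional_select(a, b, choice)).collect()
}
```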
Luke Parker
2c8af04781 machete, drain > mem::swap for clarity reasons 2024-09-19 23:36:32 -07:00
Luke Parker
a0ed043372 Move old processor/src directory to processor/TODO 2024-09-19 23:36:32 -07:00
Luke Parker
2984d2f8cf Misc comments 2024-09-19 23:36:32 -07:00
Luke Parker
554c5778e4 Don't track deployment block in the Router
This technically has a TOCTOU where we sync an Epoch's metadata (signifying we
did sync to that point), then check if the Router was deployed, yet at that
very moment the node resets to genesis. By ensuring the Router is deployed, we
avoid this (and don't need to track the deployment block in-contract).

Also uses a JoinSet to sync the 32 blocks in parallel.
2024-09-19 23:36:32 -07:00
Luke Parker
7e4c59a0a3 Have the Router track its deployment block
Prevents a consensus split where some nodes would drop transfers if their node
didn't think the Router was deployed, and some would handle them.
2024-09-19 23:36:32 -07:00
Luke Parker
294462641e Don't have the ERC20 collapse the top-level transfer ID to the transaction ID
Uses the ID of the transfer event associated with the top-level transfer.
2024-09-19 23:36:32 -07:00
Luke Parker
ae76749513 Transfer ETH with CREATE, not prior to CREATE
Saves a few thousand gas.
2024-09-19 23:36:32 -07:00
Luke Parker
1e1b821d34 Report a Change Output with every Eventuality to ensure we don't fall out of synchrony 2024-09-19 23:36:32 -07:00
Luke Parker
702b4c860c Add dummy fee values to the scheduler 2024-09-19 23:36:32 -07:00
Luke Parker
bc1bbf9951 Set a fixed fee transferred to the caller for publication
Avoids the risk of the gas used by the contract exceeding the gas presumed to
be used (causing an insolvency).
2024-09-19 23:36:32 -07:00
Luke Parker
ec9211fd84 Remove accidentally included bitcoin feature from processor-bin 2024-09-19 23:36:32 -07:00
Luke Parker
4292660eda Have the Ethereum scheduler create Batches as necessary
Also introduces the fee logic, despite it being stubbed.
2024-09-19 23:36:32 -07:00
Luke Parker
8ea5acbacb Update the Router smart contract to pay fees to the caller
The caller is paid a fixed fee per unit of gas spent. That arguably
incentivizes the publisher to raise the gas used by internal calls, yet this
doesn't affect the user UX as they'll have flatly paid the worst-case fee
already. It does pose a risk where callers are arguably incentivized to cause
transaction failures which consume all the gas, not just increased gas, yet:

1) Modern smart contracts don't error by consuming all the gas
2) This is presumably infeasible
3) Even if it was feasible, the gas fees gained presumably exceed the gas fees
   spent causing the failure

The benefit to only paying the callers for the gas used, not the gas allotted,
is it allows Serai to build up a buffer. While this should be minor, a few
cents on every transaction at best, if we ever do have any costs slip through
the cracks, it ideally is sufficient to handle those.
2024-09-19 23:36:32 -07:00
Luke Parker
1b1aa74770 Correct forge fmt config 2024-09-19 23:36:32 -07:00
Luke Parker
861a8352e5 Update to the latest bitcoin-serai 2024-09-19 23:36:32 -07:00
Luke Parker
e64827b6d7 Mark files in TODO/ with "TODO" to ensure it pops up on search 2024-09-19 23:36:32 -07:00
Luke Parker
c27aaf8658 Merge BlockWithAcknowledgedBatch and BatchWithoutAcknowledgeBatch
Offers a simpler API to the coordinator.
2024-09-19 23:36:32 -07:00
Luke Parker
53567e91c8 Read NetworkId from ScannerFeed trait, not env 2024-09-19 23:36:32 -07:00
Luke Parker
1a08d50e16 Remove unused code in the Ethereum processor 2024-09-19 23:36:32 -07:00
Luke Parker
855e53164e Finish Ethereum ScannerFeed 2024-09-19 23:36:32 -07:00
Luke Parker
1367e41510 Add hooks to the main loop
Lets the Ethereum processor track the first key set as soon as it's set.
2024-09-19 23:36:32 -07:00
Luke Parker
a691be21c8 Call tidy_keys upon queue_key
Prevents the potential case of the substrate task and the scan task writing to
the same storage slot at once.
2024-09-19 23:36:32 -07:00
Luke Parker
673cf8fd47 Pass the latest active key to the Block's scan function
Effectively necessary for networks on which we utilize account abstraction in
order to know what key to associate the received coins with.
2024-09-19 23:36:32 -07:00
Luke Parker
118d81bc90 Finish the Ethereum TX publishing code 2024-09-19 23:36:32 -07:00
Luke Parker
e75c4ec6ed Explicitly add an unspendable script path to the processor's generated keys 2024-09-19 23:36:32 -07:00
Luke Parker
9e628d217f cargo fmt, move ScannerFeed from String to the RPC error 2024-09-19 23:36:32 -07:00
Luke Parker
a717ae9ea7 Have the TransactionPublisher build a TxLegacy from Transaction 2024-09-19 23:36:32 -07:00
Luke Parker
98c3f75fa2 Move the Ethereum Action machine to its own file 2024-09-19 23:36:32 -07:00
Luke Parker
18178f3764 Add note on the returned top-level transfers being unordered 2024-09-19 23:36:32 -07:00
Luke Parker
bdc3bda04a Remove ethereum-serai/serai-processor-ethereum-contracts
contracts was smashed out of ethereum-serai. Both have now been smashed into
individual crates.

Creates a TODO directory with left-over test code yet to be moved.
2024-09-19 23:36:32 -07:00
Luke Parker
433beac93a Ethereum SignableTransaction, Eventuality 2024-09-19 23:36:32 -07:00
Luke Parker
8f2a9301cf Don't have the router drop transactions which may have top-level transfers
The router will now match the top-level transfer so it isn't used as the
justification for the InInstruction it's handling. This allows the theoretical
case where a top-level transfer occurs (to any entity) and an internal call
performs a transfer to Serai.

Also uses a JoinSet for fetching transactions' top-level transfers in the ERC20
crate. This does add a dependency on tokio yet improves performance, and it's
scoped under serai-processor (which is always presumed to be tokio-based).
While we could instead import futures for join_all,
https://github.com/smol-rs/futures-lite/issues/6 summarizes why that wouldn't
be a good idea. While we could prefer async-executor over tokio's JoinSet,
JoinSet doesn't share the same issues as FuturesUnordered. That means our
question is solely whether we want the async-executor executor or the tokio
executor, when we've already established the Serai processor is always presumed
to be tokio-based.
2024-09-19 23:36:32 -07:00
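
A rough sketch of the JoinSet pattern described above, under assumed stand-in types (`Transfer` and `fetch_top_level_transfer` are illustrative, not the ERC20 crate's API):

```rust
use tokio::task::JoinSet;

struct Transfer; // stand-in for the ERC20 crate's transfer type

// Stand-in for the per-transaction fetch; the real code queries an Ethereum RPC.
async fn fetch_top_level_transfer(_tx: [u8; 32]) -> Option<Transfer> {
    unimplemented!("illustrative stub")
}

// Fetch every transaction's top-level transfer concurrently. Note the results
// are unordered, as the spawned tasks complete in arbitrary order.
async fn top_level_transfers(txs: Vec<[u8; 32]>) -> Vec<Transfer> {
    let mut set = JoinSet::new();
    for tx in txs {
        set.spawn(fetch_top_level_transfer(tx));
    }
    let mut transfers = Vec::new();
    while let Some(res) = set.join_next().await {
        if let Some(transfer) = res.expect("fetch task panicked") {
            transfers.push(transfer);
        }
    }
    transfers
}
```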
Luke Parker
d21034c349 Add calls to get the messages to sign for the router 2024-09-19 23:36:32 -07:00
Luke Parker
381495618c Trim dead code 2024-09-19 23:36:32 -07:00
Luke Parker
ee0efe7cde Don't have the Deployer store the deployment block
Also updates how re-entrancy is handled to a more efficient and portable
mechanism.
2024-09-19 23:36:32 -07:00
Luke Parker
7feb7aed22 Hash the message before the challenge function in the Schnorr contract
Slightly more efficient.
2024-09-19 23:36:32 -07:00
Luke Parker
cc75a92641 Smash out the router library 2024-09-19 23:36:32 -07:00
Luke Parker
a7d5640642 Smash ERC20 into its own library 2024-09-19 23:36:32 -07:00
Luke Parker
ae61f3d359 forge fmt 2024-09-19 23:36:32 -07:00
Luke Parker
4bcea31c2a Break Ethereum Deployer into crate 2024-09-19 23:36:32 -07:00
Luke Parker
eb9bce6862 Remove OutInstruction's data field
It makes sense for networks which support arbitrary data to do so as part of
their address. This reduces the ability to perform DoSs, achieves better performance,
and better uses the type system (as now networks we don't support data on don't
have a data field).

Updates the Ethereum address definition in serai-client accordingly.
2024-09-19 23:36:32 -07:00
Luke Parker
39be23d807 Remove artifacts for serai-processor-ethereum-contracts 2024-09-19 23:36:32 -07:00
Luke Parker
3f0f4d520d Remove the Sandbox contract
If instead of intaking calls, we intake code, we can deploy a fresh contract
which makes arbitrary calls *without* attempting to build our abstraction
layer over the concept.

This should have the same gas costs, as we still have one contract deployment.
The new contract only has a constructor, so it should have no actual code and
beat the Sandbox in that regard? We do have to call into ourselves to meter the
gas, yet we already had to call into the deployed Sandbox to achieve that.

Also re-defines the OutInstruction to include tokens, implements
OutInstruction-specified gas amounts, bumps the Solidity version, and other
such misc changes.
2024-09-19 23:36:32 -07:00
Luke Parker
80ca2b780a Add tests for the premise of the Schnorr contract to the Schnorr crate 2024-09-19 23:36:32 -07:00
Luke Parker
0813351f1f OUT_DIR > artifacts 2024-09-19 23:36:32 -07:00
Luke Parker
a38d135059 rust-toolchain 1.81 2024-09-19 23:36:32 -07:00
Luke Parker
67f9f76fdf Remove publish = false 2024-09-19 23:36:32 -07:00
Luke Parker
1c5bc2259e Dedicated crate for the Schnorr contract 2024-09-19 23:36:32 -07:00
Luke Parker
bdf89f5350 Add dedicated crate for building Solidity contracts 2024-09-19 23:36:32 -07:00
Luke Parker
239127aae5 Add crate for the Ethereum contracts 2024-09-19 23:36:32 -07:00
Luke Parker
d9543bee40 Move ethereum-serai under the processor
It isn't generally usable and should be directly integrated at this point.
2024-09-19 23:36:32 -07:00
Luke Parker
8746b54a43 Don't use a different address for DAI in test
anvil will let us deploy to the existing address.
2024-09-19 23:36:32 -07:00
Luke Parker
7761798a78 Outline the Ethereum processor
This was only half-finished to begin with, unfortunately...
2024-09-19 23:36:32 -07:00
Luke Parker
72a18bf8bb Smart Contract Scheduler 2024-09-19 23:36:32 -07:00
Luke Parker
0616085109 Monero Planner
Finishes the Monero processor.
2024-09-19 23:36:32 -07:00
Luke Parker
e23176deeb Change dummy payment ID behavior on 2-output, no change
This reduces who can fingerprint the transaction from any observer of the
blockchain to just one of the two recipients.
2024-09-19 23:36:32 -07:00
Luke Parker
5551521e58 Tighten documentation on Block::number 2024-09-19 23:36:32 -07:00
Luke Parker
a2d9aeaed7 Stub out Scheduler in the Monero processor 2024-09-19 23:36:32 -07:00
Luke Parker
e1ad897f7e Allow scheduler's creation of transactions to be async and error
I don't love this, but it's the only way to select decoys without using a local
database. While the prior commit added such a database, the performance of it
presumably wasn't viable, and while TODOs marked the needed improvements, it
was still messy with an immense scope re: any auditing.

The relevant scheduler functions now take `&self` (intentional, as all
mutations should be via the `&mut impl DbTxn` passed). The calls to `&self` are
expected to be completely deterministic (as usual).
2024-09-19 23:36:32 -07:00
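
A hedged sketch of the resulting API shape; the trait, method, and bounds are illustrative, not the actual scheduler crates' definitions:

```rust
// Illustrative stand-in for serai-db's transaction trait.
trait DbTxn {
    fn put(&mut self, key: &[u8], value: &[u8]);
}

// Sketch: `&self` receivers, with all mutations routed through the passed
// transaction, and async + fallible transaction creation so decoys may be
// selected over RPC. Calls are deterministic with respect to their inputs.
trait Scheduler {
    type SignableTransaction;
    type Error;

    async fn fulfill(
        &self,
        txn: &mut impl DbTxn,
    ) -> Result<Vec<Self::SignableTransaction>, Self::Error>;
}
```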
Luke Parker
2edc2f3612 Add a database of all Monero outs into the processor
Enables synchronous transaction creation (which requires synchronous decoy
selection).
2024-09-19 23:36:32 -07:00
Luke Parker
e56af7fc51 Monero time_for_block, dust 2024-09-19 23:36:32 -07:00
Luke Parker
947e1067d9 Monero Processor scan, check_for_eventuality_resolutions 2024-09-19 23:36:32 -07:00
Luke Parker
b4e94f3d51 cargo fmt signers/scanner 2024-09-19 23:36:32 -07:00
Luke Parker
1b39138472 Define subaddress indexes to use
(1, 0) is the external address. (2, *) are the internal addresses.
2024-09-19 23:36:32 -07:00
Luke Parker
e78236276a Remove async-trait from processor/
Part of https://github.com/serai-dex/issues/607.
2024-09-19 23:36:32 -07:00
Luke Parker
2c4c33e632 Misc continuances on the Monero processor 2024-09-19 23:36:32 -07:00
Luke Parker
02409c5735 Correct Multisig Rotation to use WINDOW_LENGTH where proper 2024-09-19 23:36:32 -07:00
Luke Parker
f2cf03cedf Monero processor primitives 2024-09-19 23:36:32 -07:00
Luke Parker
0d4c8cf032 Use a local DB channel for sending to the message-queue
The provided message-queue queue function runs until it succeeds. This means
sending to the message-queue will no longer potentially block for arbitrary
amounts of time, as sending messages is just writing them to a DB.
2024-09-19 23:36:32 -07:00
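
A sketch of that pattern with stand-in types (none of these are the actual serai-db or message-queue APIs): the "send" is a local DB write that can't block, while a dedicated task drains the channel and performs the actual, retrying, delivery.

```rust
// Illustrative stand-ins, not the actual serai-db/message-queue APIs.
trait DbTxn {
    fn put(&mut self, key: &[u8], value: &[u8]);
    fn get(&self, key: &[u8]) -> Option<Vec<u8>>;
    fn del(&mut self, key: &[u8]);
}

struct MessageQueue;
impl MessageQueue {
    // The real queue function retries until the message-queue accepts the message.
    async fn queue(&self, _msg: &[u8]) {}
}

// "Sending" is now a local DB write, so callers can't block on the network.
fn send(txn: &mut impl DbTxn, index: u64, msg: &[u8]) {
    txn.put(&index.to_le_bytes(), msg);
}

// Only a dedicated background task pays the cost of delivery, draining the
// channel in order.
async fn drain(txn: &mut impl DbTxn, queue: &MessageQueue, mut next: u64) {
    while let Some(msg) = txn.get(&next.to_le_bytes()) {
        queue.queue(&msg).await;
        txn.del(&next.to_le_bytes());
        next += 1;
    }
}
```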
Luke Parker
b6811f9015 serai-processor-bin
Moves the coordinator loop out of serai-bitcoin-processor, completing it.

Fixes a potential race condition in the message-queue regarding multiple
sockets sending messages at once.
2024-09-19 23:36:32 -07:00
Luke Parker
fcd5fb85df Add binary search to find the block to start scanning from 2024-09-19 23:36:32 -07:00
Luke Parker
3ac0265f07 Add section documenting the safety of txindex upon reorganizations 2024-09-19 23:36:32 -07:00
Luke Parker
9b8c8f8231 Misc tidying of serai-db calls 2024-09-19 23:36:32 -07:00
Luke Parker
59fa49f750 Continue filling out main loop
Adds generics to the db_channel macro, fixes the bug where it needed at least
one key.
2024-09-19 23:36:32 -07:00
Luke Parker
723f529659 Note better message structure in messages 2024-09-19 23:36:32 -07:00
Luke Parker
73af09effb Add note to signers on reducing disk IO 2024-09-19 23:36:32 -07:00
Luke Parker
4054e44471 Start on the new processor main loop 2024-09-19 23:36:32 -07:00
Luke Parker
a8159e9070 Bitcoin Key Gen 2024-09-19 23:36:32 -07:00
Luke Parker
b61ba9d1bb Adjust Bitcoin processor layout 2024-09-19 23:36:32 -07:00
Luke Parker
776cbbb9a4 Misc changes in response to prior two commits 2024-09-19 23:36:32 -07:00
Luke Parker
76a3f3ec4b Add an anyone-can-pay output to every Bitcoin transaction
Resolves #284.
2024-09-19 23:36:32 -07:00
Luke Parker
93c7d06684 Implement presumed_origin
Before we yield a block for scanning, we save all of the contained script
public keys. Then, when we want the address credited for creating an output,
we read the script public key of the spent output from the database.

Fixes #559.
2024-09-19 23:36:32 -07:00
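
A sketch of the described flow with stand-in types (the real implementation uses the Bitcoin processor's own types and DB, not a HashMap):

```rust
use std::collections::HashMap;

// Illustrative stand-ins for the Bitcoin processor's types.
struct TxOut { script_pubkey: Vec<u8> }
struct Tx { id: [u8; 32], inputs: Vec<([u8; 32], u32)>, outputs: Vec<TxOut> }

// Before yielding a block for scanning, persist every output's script public key.
fn index_script_public_keys(index: &mut HashMap<([u8; 32], u32), Vec<u8>>, txs: &[Tx]) {
    for tx in txs {
        for (vout, output) in tx.outputs.iter().enumerate() {
            index.insert((tx.id, u32::try_from(vout).unwrap()), output.script_pubkey.clone());
        }
    }
}

// The presumed origin of a transaction is the script public key of the output
// spent by its first input, read back from the index.
fn presumed_origin(index: &HashMap<([u8; 32], u32), Vec<u8>>, tx: &Tx) -> Option<Vec<u8>> {
    let (prev_tx, prev_vout) = tx.inputs.first()?;
    index.get(&(*prev_tx, *prev_vout)).cloned()
}
```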
Luke Parker
4cb838e248 Bitcoin processor lib.rs -> main.rs 2024-09-19 23:36:32 -07:00
Luke Parker
c988b7cdb0 Bitcoin TransactionPublisher 2024-09-19 23:36:32 -07:00
Luke Parker
017aab2258 Satisfy Scheduler for Bitcoin 2024-09-19 23:36:32 -07:00
Luke Parker
ba3a6f9e91 Bitcoin ScannerFeed 2024-09-19 23:36:32 -07:00
Luke Parker
e36b671f37 Remove bound that WINDOW_LENGTH < CONFIRMATIONS
It's unnecessary and not valuable.
2024-09-19 23:36:32 -07:00
Luke Parker
2d4b775b6e Add bitcoin Block trait impl 2024-09-19 23:36:32 -07:00
Luke Parker
247cc8f0cc Bitcoin Output/Transaction definitions 2024-09-19 23:36:32 -07:00
Luke Parker
0ccf71df1e Remove old signer impls 2024-09-19 23:36:32 -07:00
Luke Parker
8aba71b9c4 Add CosignerTask to signers, completing it 2024-09-19 23:36:32 -07:00
Luke Parker
46c12c0e66 SlashReport signing and signature publication 2024-09-19 23:36:32 -07:00
Luke Parker
3cc7b49492 Strongly type SlashReport, populate cosign/slash report tasks with work 2024-09-19 23:36:32 -07:00
Luke Parker
0078858c1c Tidy messages, publish all Batches to the coordinator
Prior, we published SignedBatches, yet Batches are necessary for auditing
purposes.
2024-09-19 23:36:32 -07:00
Luke Parker
a3cb514400 Have the coordinator task publish Batches 2024-09-19 23:36:32 -07:00
Luke Parker
ed0221d804 Add BatchSignerTask
Uses a wrapper around AlgorithmMachine Schnorrkel to let the message be &[].
2024-09-19 23:36:32 -07:00
Luke Parker
4152bcacb2 Replace scanner's BatchPublisher with a pair of DB channels 2024-09-19 23:36:32 -07:00
Luke Parker
f07ec7bee0 Route the coordinator, fix race conditions in the signers library 2024-09-19 23:36:32 -07:00
Luke Parker
7484eadbbb Expand task management
These extensions are necessary for the signers task management.
2024-09-19 23:36:32 -07:00
Luke Parker
59ff944152 Work on the higher-level signers API 2024-09-19 23:36:32 -07:00
Luke Parker
8f848b1abc Tidy transaction signing task 2024-09-19 23:36:32 -07:00
Luke Parker
100c80be9f Finish transaction signing task with TX rebroadcast code 2024-09-19 23:36:32 -07:00
Luke Parker
a353f9e2da Further work on transaction signing 2024-09-19 23:36:32 -07:00
Luke Parker
b62fc3a1fa Minor work on the transaction signing task 2024-09-19 23:36:32 -07:00
Luke Parker
8380653855 Add empty serai-processor-signers library
This will replace the signers still in the monolithic Processor binary.
2024-09-19 23:36:32 -07:00
Luke Parker
b50b889918 Split processor into bitcoin-processor, ethereum-processor, monero-processor 2024-09-19 23:36:32 -07:00
Luke Parker
d570c1d277 Move additional_key.rs to serai-processor-view-keys
I don't love this. I wanted to simply add this function to `processor/key-gen`,
but then anyone who wants a view key needs to pull in Bulletproofs which is a
mess of code. They'd also be subject to an AGPL licensed library.

This is so small it should be a primitive elsewhere, yet there is no primitives
library eligible. Maybe serai-client since that has the code to make
transactions to Serai (and will have this as a dependency)? Except then the
processor has to import serai-client when this rewrite removed it as a
dependency.
2024-09-19 23:36:32 -07:00
Luke Parker
2da24506a2 Remove vast swaths of legacy code in the processor 2024-09-19 23:36:32 -07:00
Luke Parker
6e9cb74022 Add non-transaction-chaining scheduler 2024-09-19 23:36:32 -07:00
Luke Parker
0c1aec29bb Finish routing output flushing
Completes the transaction-chaining scheduler.
2024-09-19 23:36:32 -07:00
Luke Parker
653ead1e8c Finish the tree logic in the transaction-chaining scheduler
Also completes the DB functions, makes Scheduler never instantiated, and
ensures tree roots have change outputs.
2024-09-19 23:36:32 -07:00
Luke Parker
8ff019265f Near-complete version of the tree algorithm in the transaction-chaining scheduler 2024-09-19 23:36:32 -07:00
Luke Parker
0601d47789 Work on the tree logic in the transaction-chaining scheduler 2024-09-19 23:36:32 -07:00
Luke Parker
ebef38d93b Ensure the transaction-chaining scheduler doesn't accumulate the same output multiple times 2024-09-19 23:36:32 -07:00
Luke Parker
75b4707002 Add input aggregation in the transaction-chaining scheduler
Also handles some other misc in it.
2024-09-19 23:36:32 -07:00
Luke Parker
3c787e005f Fix bug in the scanner regarding forwarded output amounts
We'd report the amount originally received, minus 2x the cost to aggregate,
regardless of the amount successfully forwarded. We should've reduced to the
amount successfully forwarded, if it was smaller, in case the cost to
forward exceeded the aggregation cost.
2024-09-19 23:36:32 -07:00
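
The fix, as arithmetic, with illustrative names:

```rust
// Prior behavior: report `received - 2 * cost_to_aggregate`, even if the
// forwarding itself yielded less.
// Fixed behavior: never report more than what was actually forwarded.
fn reported_forwarded_amount(received: u64, cost_to_aggregate: u64, forwarded: u64) -> u64 {
    received.saturating_sub(2 * cost_to_aggregate).min(forwarded)
}
```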
Luke Parker
f11a6b4ff1 Better document the forwarded output flow 2024-09-19 23:36:32 -07:00
Luke Parker
fadc88d2ad Add scheduler-primitives
The main benefit is that, whatever scheduler is in use, we now have a single API to
receive TXs to sign (which is of value to the TX signer crate we'll inevitably
build).
2024-09-19 23:36:32 -07:00
Luke Parker
c88ebe985e Outline of the transaction-chaining scheduler 2024-09-19 23:36:32 -07:00
Luke Parker
6deb60513c Expand primitives/scanner with niceties needed for the scheduler 2024-09-19 23:36:32 -07:00
Luke Parker
bd277e7032 Add processor/scheduler/utxo/primitives
Includes the necessary signing functions and the fee amortization logic.

Moves transaction-chaining to utxo/transaction-chaining.
2024-09-19 23:36:32 -07:00
Luke Parker
fc765bb9e0 Add crate for the transaction-chaining Scheduler 2024-09-19 23:36:32 -07:00
Luke Parker
13b74195f7 Don't have acknowledge_batch immediately run
`acknowledge_batch` can only be run if we know what the Batch should be. If we
don't know what the Batch should be, we have to block until we do.
Specifically, we need the block number associated with the Batch.

Instead of blocking over the Scanner API, the Scanner API now solely queues
actions. A new task intakes those actions once we can. This ensures we can
intake the entire Substrate chain, even if our daemon for the external network
is stalled at its genesis block.

All of this for the block number alone seems ridiculous. To go from the block
hash in the Batch to the block number without this task, we'd at least need the
index task to be up to date (still requiring blocking or an API returning
ephemeral errors).
2024-09-19 23:36:32 -07:00
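
A hedged sketch of the described split between queueing and intake; the key layout and names are illustrative:

```rust
// Illustrative stand-in for serai-db's transaction trait.
trait DbTxn {
    fn put(&mut self, key: &[u8], value: &[u8]);
}

// The scanner API solely queues the acknowledgement; it never blocks on
// knowing the block number for the Batch's block hash.
fn acknowledge_batch(txn: &mut impl DbTxn, batch_id: u32, block_hash: [u8; 32]) {
    let key = [b"acknowledged_batches".as_slice(), &batch_id.to_le_bytes()].concat();
    txn.put(&key, &block_hash);
}

// A separate task later intakes queued acknowledgements, once the index task
// can map the Batch's block hash to a block number.
```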
Luke Parker
f21838e0d5 Replace acknowledge_block with acknowledge_batch 2024-09-19 23:36:32 -07:00
Luke Parker
76cbe6cf1e Have acknowledge_block take in the results of the InInstructions executed
If any failed, the scanner now creates a Burn for the return.
2024-09-19 23:36:32 -07:00
Luke Parker
5999f5d65a Route the DB w.r.t. forwarded outputs' information 2024-09-19 23:36:32 -07:00
Luke Parker
d429a0bae6 Remove unused ID -> number lookup 2024-09-19 23:36:32 -07:00
Luke Parker
775824f373 Impl ScanData serialization in the DB 2024-09-19 23:36:32 -07:00
Luke Parker
41a74cb513 Check a queued key has never been queued before
Re-queueing should only happen with a malicious supermajority and breaks
indexing by the key.
2024-09-19 23:36:32 -07:00
Luke Parker
e26da1ec34 Have the Eventuality task drop outputs which aren't ours and aren't worth it to aggregate
We could drop these entirely, yet there's some degree of utility to be able to
add coins to Serai in this manner.
2024-09-19 23:36:32 -07:00
Luke Parker
7266e7f7ea Add note on why LifetimeStage is monotonic 2024-09-19 23:36:32 -07:00
Luke Parker
a8b9b7bad3 Add sanity checks we haven't prior reported an InInstruction for/accumulated an output 2024-09-19 23:36:32 -07:00
Luke Parker
2ca7fccb08 Pass the lifetime information to the scheduler
Enables it to decide which keys to use for fulfillment/change.
2024-09-19 23:36:32 -07:00
Luke Parker
4f6d91037e Call flush_key 2024-09-19 23:36:32 -07:00
Luke Parker
8db76ed67c Add key management to the scheduler 2024-09-19 23:36:32 -07:00
Luke Parker
920303e1b4 Add helper to intake Eventualities 2024-09-19 23:36:32 -07:00
Luke Parker
9f4b28e5ae Clarify output-to-self to output-to-Serai
The only requirement is that it's to an active key which is being reported for.
2024-09-19 23:36:32 -07:00
Luke Parker
f9d02d43c2 Route burns through the scanner 2024-09-19 23:36:32 -07:00
Luke Parker
8ac501028d Add API to publish Batches with
This doesn't have to be abstract, we can generate the message and use the
message-queue API, yet this should help with testing.
2024-09-19 23:36:32 -07:00
Luke Parker
612c67c537 Cache the cost to aggregate 2024-09-19 23:36:32 -07:00
Luke Parker
04a971a024 Fill in various DB functions 2024-09-19 23:36:32 -07:00
Luke Parker
738636c238 Have Scanner::new spawn tasks 2024-09-19 23:36:32 -07:00
Luke Parker
65f3f48517 Add ReportDb 2024-09-19 23:36:32 -07:00
Luke Parker
7cc07d64d1 Make report.rs a folder, not a file 2024-09-19 23:36:32 -07:00
Luke Parker
fdfe520f9d Add ScanDb 2024-09-19 23:36:32 -07:00
Luke Parker
77ef25416b Make scan.rs a folder, not a file 2024-09-19 23:36:32 -07:00
Luke Parker
7c1025dbcb Implement key retirement 2024-09-19 23:36:32 -07:00
Luke Parker
a771fbe1c6 Logs, documentation, misc 2024-09-19 23:36:32 -07:00
Luke Parker
9cebdf7c68 Add sorts for safety even upon non-determinism 2024-09-19 23:36:32 -07:00
Luke Parker
75251f04b4 Use a channel for the InInstructions
It's still unclear how we'll handle refunding failed InInstructions at this
time. Presumably, extending the InInstruction channel with the associated
output ID?
2024-09-19 23:36:32 -07:00
Luke Parker
6196642beb Add a DbChannel between scan and eventuality task 2024-09-19 23:36:32 -07:00
Luke Parker
2bddf00222 Don't expose IndexDb throughout the crate 2024-09-19 23:36:32 -07:00
Luke Parker
9ab8ba0215 Add dedicated Eventuality DB and stub missing fns 2024-09-19 23:36:32 -07:00
Luke Parker
33e0c85f34 Make Eventuality a folder, not a file 2024-09-19 23:36:32 -07:00
Luke Parker
1e8f4e6156 Make a dedicated IndexDb 2024-09-19 23:36:32 -07:00
Luke Parker
66f3428051 Make index a folder, not a file 2024-09-19 23:36:32 -07:00
Luke Parker
7e71840822 Add helper methods
Checks that fetched blocks are the indexed blocks. Sorts scanned outputs, as
they're otherwise subject to an implicit order which may be non-deterministic
(such as if handled by a threadpool).
2024-09-19 23:36:32 -07:00
Luke Parker
b65dbacd6a Move ContinuallyRan into primitives
I'm unsure where else it'll be used within the processor, yet it's generally
useful and I don't want to make a dedicated crate yet.
2024-09-19 23:36:32 -07:00
Luke Parker
2fcd9530dd Add a callback to accumulate outputs and return the new Eventualities 2024-09-19 23:36:32 -07:00
Luke Parker
379780a3c9 Flesh out eventuality task 2024-09-19 23:36:32 -07:00
Luke Parker
945f31dfc7 Have the scan flag blocks with change/branch/forwarded as notable 2024-09-19 23:36:32 -07:00
Luke Parker
d5d1fc3eea Flesh out report task 2024-09-19 23:36:32 -07:00
Luke Parker
fd12cc0213 Finish scan task 2024-09-19 23:36:32 -07:00
Luke Parker
ce805c8cc8 Correct compilation errors 2024-09-19 23:36:32 -07:00
Luke Parker
bc0cc5a754 Decide flow between scan/eventuality/report
Scan now only handles External outputs, with an associated essay going over
why. Scan directly creates the InInstruction (prior planned to be done in
Report), and Eventuality is declared to end up yielding the outputs.

That will require making the Eventuality flow two-stage. One stage to evaluate
existing Eventualities and yield outputs, and one stage to incorporate new
Eventualities before advancing the scan window.
2024-09-19 23:36:32 -07:00
Luke Parker
f2ee4daf43 Add Eventuality back to processor primitives
Also splits crate into modules.
2024-09-19 23:36:32 -07:00
Luke Parker
4e29678799 Add bounds for the eventuality task 2024-09-19 23:36:32 -07:00
Luke Parker
74d3075dae Document expectations on Eventuality task and correct code determining the block safe to scan/report 2024-09-19 23:36:32 -07:00
Luke Parker
155ad48f4c Handle dust 2024-09-19 23:36:32 -07:00
Luke Parker
951872b026 Differentiate BlockHeader from Block 2024-09-19 23:36:32 -07:00
Luke Parker
2b47feafed Correct misc compilation errors 2024-09-19 23:36:32 -07:00
Luke Parker
a2717d73f0 Flesh out new scanner a bit more
Adds the task to mark blocks safe to scan, and outlines the task to report
blocks.
2024-09-19 23:36:32 -07:00
Luke Parker
8763ef23ed Definition and delineation of tasks within the scanner
Also defines primitives for the processor.
2024-09-19 23:36:32 -07:00
Luke Parker
57a0ba966b Extend serai-db with support for generic keys/values 2024-09-19 23:36:32 -07:00
Luke Parker
e843b4a2a0 Move scanner.rs to scanner/lib.rs 2024-09-19 23:36:32 -07:00
Luke Parker
2f3bd7a02a Cleanup DB handling a bit in key-gen/attempt-manager 2024-09-19 23:36:32 -07:00
Luke Parker
1e8a9ec5bd Smash out the signer
Abstract, to be done for the transactions, the batches, the cosigns, the slash
reports, everything. It has a minimal API itself, intending to be as clear as
possible.
2024-09-19 23:36:32 -07:00
Luke Parker
2f29c91d30 Smash key-gen out of processor
Resolves some bad assumptions made regarding keys being unique or not.
2024-09-19 23:36:32 -07:00
Luke Parker
f3b91bd44f Smash key-gen into independent crate 2024-09-19 23:36:32 -07:00
Luke Parker
e4e4245ee3 One Round DKG (#589)
* Upstream GBP, divisor, circuit abstraction, and EC gadgets from FCMP++

* Initial eVRF implementation

Not quite done yet. It needs to communicate the resulting points and proofs to
extract them from the Pedersen Commitments in order to return those, and then
be tested.

* Add the openings of the PCs to the eVRF as necessary

* Add implementation of secq256k1

* Make DKG Encryption a bit more flexible

No longer requires the use of an EncryptionKeyMessage, and allows pre-defined
keys for encryption.

* Make NUM_BITS an argument for the field macro

* Have the eVRF take a Zeroizing private key

* Initial eVRF-based DKG

* Add embedwards25519 curve

* Inline the eVRF into the DKG library

Due to how we're handling share encryption, we'd either need two circuits or to
dedicate this circuit to the DKG. The latter makes sense at this time.

* Add documentation to the eVRF-based DKG

* Add paragraph claiming robustness

* Update to the new eVRF proof

* Finish routing the eVRF functionality

Still needs errors and serialization, along with a few other TODOs.

* Add initial eVRF DKG test

* Improve eVRF DKG

Updates how we calculate verification shares, improves performance when
extracting multiple sets of keys, and adds more to the test for it.

* Start using a proper error for the eVRF DKG

* Resolve various TODOs

Supports recovering multiple key shares from the eVRF DKG.

Inlines two loops to save 2**16 iterations.

Adds support for creating a constant time representation of scalars < NUM_BITS.

* Ban zero ECDH keys, document non-zero requirements

* Implement eVRF traits, all the way up to the DKG, for secp256k1/ed25519

* Add Ristretto eVRF trait impls

* Support participating multiple times in the eVRF DKG

* Only participate once per key, not once per key share

* Rewrite processor key-gen around the eVRF DKG

Still a WIP.

* Finish routing the new key gen in the processor

Doesn't touch the tests, coordinator, nor Substrate yet.
`cargo +nightly fmt && cargo +nightly-2024-07-01 clippy --all-features -p serai-processor`
does pass.

* Deduplicate and better document in processor key_gen

* Update serai-processor tests to the new key gen

* Correct amount of yx coefficients, get processor key gen test to pass

* Add embedded elliptic curve keys to Substrate

* Update processor key gen tests to the eVRF DKG

* Have set_keys take signature_participants, not removed_participants

Now no one is removed from the DKG. Only `t` people publish the key however.

Uses a BitVec for an efficient encoding of the participants.

* Update the coordinator binary for the new DKG

This does not yet update any tests.

* Add sensible Debug to key_gen::[Processor, Coordinator]Message

* Have the DKG explicitly declare how to interpolate its shares

Removes the hack for MuSig where we multiply keys by the inverse of their
lagrange interpolation factor.

* Replace Interpolation::None with Interpolation::Constant

Allows the MuSig DKG to keep the secret share as the original private key,
enabling deriving FROST nonces consistently regardless of the MuSig context.

* Get coordinator tests to pass

* Update spec to the new DKG

* Get clippy to pass across the repo

* cargo machete

* Add an extra sleep to ensure expected ordering of `Participation`s

* Update orchestration

* Remove bad panic in coordinator

It expected ConfirmationShare to be n-of-n, not t-of-n.

* Improve documentation on functions

* Update TX size limit

We now no longer have to support the ridiculous case of having 49 DKG
participations within a 101-of-150 DKG. It does remain quite high due to
needing to _sign_ so many times. It may be optimal for parties with multiple
key shares to independently send their preprocesses/shares (despite the
overhead that'll cause with signatures and the transaction structure).

* Correct error in the Processor spec document

* Update a few comments in the validator-sets pallet

* Send/Recv Participation one at a time

Sending all, then attempting to receive all in an expected order, wasn't working
even with notable delays between sending messages. This points to the mempool
not working as expected...

* Correct ThresholdKeys serialization in modular-frost test

* Update the existing TX size limit test for the new DKG parameters

* Increase time allowed for the DKG on the GH CI

* Correct construction of signature_participants in serai-client tests

Fault identified by akil.

* Further contextualize DkgConfirmer by ValidatorSet

Caught by a safety check that we don't reuse preprocesses across messages. That
raises the question of whether we were previously reusing preprocesses (reusing
keys)? Except that'd have caused a variety of signing failures (suggesting we
had some staggered timing avoiding it in practice but yes, this was possible in
theory).

* Add necessary calls to set_embedded_elliptic_curve_key in coordinator set rotation tests

* Correct shimmed setting of a secq256k1 key

* cargo fmt

* Don't use `[0; 32]` for the embedded keys in the coordinator rotation test

The key_gen function expects the random values to have already been decided.

* Big-endian secq256k1 scalars

Also restores the prior, safer, Encryption::register function.
2024-09-19 21:43:26 -04:00
458 changed files with 31377 additions and 16081 deletions


@@ -37,4 +37,4 @@ runs:
 - name: Bitcoin Regtest Daemon
   shell: bash
-  run: PATH=$PATH:/usr/bin ./orchestration/dev/networks/bitcoin/run.sh -daemon
+  run: PATH=$PATH:/usr/bin ./orchestration/dev/networks/bitcoin/run.sh -txindex -daemon


@@ -42,8 +42,8 @@ runs:
 shell: bash
 run: |
   cargo install svm-rs
-  svm install 0.8.25
-  svm use 0.8.25
+  svm install 0.8.26
+  svm use 0.8.26
 # - name: Cache Rust
 #   uses: Swatinem/rust-cache@a95ba195448af2da9b00fb742d14ffaaf3c21f43


@@ -30,4 +30,5 @@ jobs:
   -p patchable-async-sleep \
   -p serai-db \
   -p serai-env \
+  -p serai-task \
   -p simple-request


@@ -35,6 +35,10 @@ jobs:
   -p multiexp \
   -p schnorr-signatures \
   -p dleq \
+  -p generalized-bulletproofs \
+  -p generalized-bulletproofs-circuit-abstraction \
+  -p ec-divisors \
+  -p generalized-bulletproofs-ec-gadgets \
   -p dkg \
   -p modular-frost \
   -p frost-schnorrkel


@@ -73,6 +73,15 @@ jobs:
 - name: Run rustfmt
   run: cargo +${{ steps.nightly.outputs.version }} fmt -- --check
+- name: Install foundry
+  uses: foundry-rs/foundry-toolchain@8f1998e9878d786675189ef566a2e4bf24869773
+  with:
+    version: nightly-41d4e5437107f6f42c7711123890147bc736a609
+    cache: false
+- name: Run forge fmt
+  run: FOUNDRY_FMT_SORT_INPUTS=false FOUNDRY_FMT_LINE_LENGTH=100 FOUNDRY_FMT_TAB_WIDTH=2 FOUNDRY_FMT_BRACKET_SPACING=true FOUNDRY_FMT_INT_TYPES=preserve forge fmt --check $(find . -iname "*.sol")
 machete:
   runs-on: ubuntu-latest
   steps:
@@ -81,3 +90,25 @@ jobs:
   run: |
     cargo install cargo-machete
     cargo machete
+slither:
+  runs-on: ubuntu-latest
+  steps:
+    - uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
+    - name: Slither
+      run: |
+        python3 -m pip install solc-select
+        solc-select install 0.8.26
+        solc-select use 0.8.26
+        python3 -m pip install slither-analyzer
+        slither --include-paths ./networks/ethereum/schnorr/contracts/Schnorr.sol
+        slither --include-paths ./networks/ethereum/schnorr/contracts ./networks/ethereum/schnorr/contracts/tests/Schnorr.sol
+        slither processor/ethereum/deployer/contracts/Deployer.sol
+        slither processor/ethereum/erc20/contracts/IERC20.sol
+        cp networks/ethereum/schnorr/contracts/Schnorr.sol processor/ethereum/router/contracts/
+        cp processor/ethereum/erc20/contracts/IERC20.sol processor/ethereum/router/contracts/
+        cd processor/ethereum/router/contracts
+        slither Router.sol

.github/workflows/msrv.yml (new file, 255 lines)

@@ -0,0 +1,255 @@
name: Weekly MSRV Check
on:
schedule:
- cron: "0 0 * * 0"
workflow_dispatch:
jobs:
msrv-common:
name: Run cargo msrv on common
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Install Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Install cargo msrv
run: cargo install --locked cargo-msrv
- name: Run cargo msrv on common
run: |
cargo msrv verify --manifest-path common/zalloc/Cargo.toml
cargo msrv verify --manifest-path common/std-shims/Cargo.toml
cargo msrv verify --manifest-path common/env/Cargo.toml
cargo msrv verify --manifest-path common/db/Cargo.toml
cargo msrv verify --manifest-path common/task/Cargo.toml
cargo msrv verify --manifest-path common/request/Cargo.toml
cargo msrv verify --manifest-path common/patchable-async-sleep/Cargo.toml
msrv-crypto:
name: Run cargo msrv on crypto
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Install Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Install cargo msrv
run: cargo install --locked cargo-msrv
- name: Run cargo msrv on crypto
run: |
cargo msrv verify --manifest-path crypto/transcript/Cargo.toml
cargo msrv verify --manifest-path crypto/ff-group-tests/Cargo.toml
cargo msrv verify --manifest-path crypto/dalek-ff-group/Cargo.toml
cargo msrv verify --manifest-path crypto/ed448/Cargo.toml
cargo msrv verify --manifest-path crypto/multiexp/Cargo.toml
cargo msrv verify --manifest-path crypto/dleq/Cargo.toml
cargo msrv verify --manifest-path crypto/ciphersuite/Cargo.toml
cargo msrv verify --manifest-path crypto/schnorr/Cargo.toml
cargo msrv verify --manifest-path crypto/evrf/generalized-bulletproofs/Cargo.toml
cargo msrv verify --manifest-path crypto/evrf/circuit-abstraction/Cargo.toml
cargo msrv verify --manifest-path crypto/evrf/divisors/Cargo.toml
cargo msrv verify --manifest-path crypto/evrf/ec-gadgets/Cargo.toml
cargo msrv verify --manifest-path crypto/evrf/embedwards25519/Cargo.toml
cargo msrv verify --manifest-path crypto/evrf/secq256k1/Cargo.toml
cargo msrv verify --manifest-path crypto/dkg/Cargo.toml
cargo msrv verify --manifest-path crypto/frost/Cargo.toml
cargo msrv verify --manifest-path crypto/schnorrkel/Cargo.toml
msrv-networks:
name: Run cargo msrv on networks
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Install Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Install cargo msrv
run: cargo install --locked cargo-msrv
- name: Run cargo msrv on networks
run: |
cargo msrv verify --manifest-path networks/bitcoin/Cargo.toml
cargo msrv verify --manifest-path networks/ethereum/build-contracts/Cargo.toml
cargo msrv verify --manifest-path networks/ethereum/schnorr/Cargo.toml
cargo msrv verify --manifest-path networks/ethereum/alloy-simple-request-transport/Cargo.toml
cargo msrv verify --manifest-path networks/ethereum/relayer/Cargo.toml --features parity-db
cargo msrv verify --manifest-path networks/monero/io/Cargo.toml
cargo msrv verify --manifest-path networks/monero/generators/Cargo.toml
cargo msrv verify --manifest-path networks/monero/primitives/Cargo.toml
cargo msrv verify --manifest-path networks/monero/ringct/mlsag/Cargo.toml
cargo msrv verify --manifest-path networks/monero/ringct/clsag/Cargo.toml
cargo msrv verify --manifest-path networks/monero/ringct/borromean/Cargo.toml
cargo msrv verify --manifest-path networks/monero/ringct/bulletproofs/Cargo.toml
cargo msrv verify --manifest-path networks/monero/Cargo.toml
cargo msrv verify --manifest-path networks/monero/rpc/Cargo.toml
cargo msrv verify --manifest-path networks/monero/rpc/simple-request/Cargo.toml
cargo msrv verify --manifest-path networks/monero/wallet/address/Cargo.toml
cargo msrv verify --manifest-path networks/monero/wallet/Cargo.toml
cargo msrv verify --manifest-path networks/monero/verify-chain/Cargo.toml
msrv-message-queue:
name: Run cargo msrv on message-queue
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Install Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Install cargo msrv
run: cargo install --locked cargo-msrv
- name: Run cargo msrv on message-queue
run: |
cargo msrv verify --manifest-path message-queue/Cargo.toml --features parity-db
msrv-processor:
name: Run cargo msrv on processor
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Install Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Install cargo msrv
run: cargo install --locked cargo-msrv
- name: Run cargo msrv on processor
run: |
cargo msrv verify --manifest-path processor/view-keys/Cargo.toml
cargo msrv verify --manifest-path processor/primitives/Cargo.toml
cargo msrv verify --manifest-path processor/messages/Cargo.toml
cargo msrv verify --manifest-path processor/scanner/Cargo.toml
cargo msrv verify --manifest-path processor/scheduler/primitives/Cargo.toml
cargo msrv verify --manifest-path processor/scheduler/smart-contract/Cargo.toml
cargo msrv verify --manifest-path processor/scheduler/utxo/primitives/Cargo.toml
cargo msrv verify --manifest-path processor/scheduler/utxo/standard/Cargo.toml
cargo msrv verify --manifest-path processor/scheduler/utxo/transaction-chaining/Cargo.toml
cargo msrv verify --manifest-path processor/key-gen/Cargo.toml
cargo msrv verify --manifest-path processor/frost-attempt-manager/Cargo.toml
cargo msrv verify --manifest-path processor/signers/Cargo.toml
cargo msrv verify --manifest-path processor/bin/Cargo.toml --features parity-db
cargo msrv verify --manifest-path processor/bitcoin/Cargo.toml
cargo msrv verify --manifest-path processor/ethereum/primitives/Cargo.toml
cargo msrv verify --manifest-path processor/ethereum/test-primitives/Cargo.toml
cargo msrv verify --manifest-path processor/ethereum/erc20/Cargo.toml
cargo msrv verify --manifest-path processor/ethereum/deployer/Cargo.toml
cargo msrv verify --manifest-path processor/ethereum/router/Cargo.toml
cargo msrv verify --manifest-path processor/ethereum/Cargo.toml
cargo msrv verify --manifest-path processor/monero/Cargo.toml
msrv-coordinator:
name: Run cargo msrv on coordinator
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Install Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Install cargo msrv
run: cargo install --locked cargo-msrv
- name: Run cargo msrv on coordinator
run: |
cargo msrv verify --manifest-path coordinator/tributary/tendermint/Cargo.toml
cargo msrv verify --manifest-path coordinator/tributary/Cargo.toml
cargo msrv verify --manifest-path coordinator/cosign/Cargo.toml
cargo msrv verify --manifest-path coordinator/Cargo.toml
msrv-substrate:
name: Run cargo msrv on substrate
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Install Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Install cargo msrv
run: cargo install --locked cargo-msrv
- name: Run cargo msrv on substrate
run: |
cargo msrv verify --manifest-path substrate/primitives/Cargo.toml
cargo msrv verify --manifest-path substrate/coins/primitives/Cargo.toml
cargo msrv verify --manifest-path substrate/coins/pallet/Cargo.toml
cargo msrv verify --manifest-path substrate/dex/pallet/Cargo.toml
cargo msrv verify --manifest-path substrate/economic-security/pallet/Cargo.toml
cargo msrv verify --manifest-path substrate/genesis-liquidity/primitives/Cargo.toml
cargo msrv verify --manifest-path substrate/genesis-liquidity/pallet/Cargo.toml
cargo msrv verify --manifest-path substrate/in-instructions/primitives/Cargo.toml
cargo msrv verify --manifest-path substrate/in-instructions/pallet/Cargo.toml
cargo msrv verify --manifest-path substrate/validator-sets/pallet/Cargo.toml
cargo msrv verify --manifest-path substrate/validator-sets/primitives/Cargo.toml
cargo msrv verify --manifest-path substrate/emissions/primitives/Cargo.toml
cargo msrv verify --manifest-path substrate/emissions/pallet/Cargo.toml
cargo msrv verify --manifest-path substrate/signals/primitives/Cargo.toml
cargo msrv verify --manifest-path substrate/signals/pallet/Cargo.toml
cargo msrv verify --manifest-path substrate/abi/Cargo.toml
cargo msrv verify --manifest-path substrate/client/Cargo.toml
cargo msrv verify --manifest-path substrate/runtime/Cargo.toml
cargo msrv verify --manifest-path substrate/node/Cargo.toml
msrv-orchestration:
name: Run cargo msrv on orchestration
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Install Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Install cargo msrv
run: cargo install --locked cargo-msrv
- name: Run cargo msrv on message-queue
run: |
cargo msrv verify --manifest-path orchestration/Cargo.toml
msrv-mini:
name: Run cargo msrv on mini
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Install Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Install cargo msrv
run: cargo install --locked cargo-msrv
- name: Run cargo msrv on mini
run: |
cargo msrv verify --manifest-path mini/Cargo.toml


@@ -30,8 +30,9 @@ jobs:
 run: |
   GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features \
     -p bitcoin-serai \
+    -p build-solidity-contracts \
+    -p ethereum-schnorr-contract \
     -p alloy-simple-request-transport \
-    -p ethereum-serai \
     -p serai-ethereum-relayer \
     -p monero-io \
     -p monero-generators \


@@ -39,9 +39,29 @@
   GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features \
     -p serai-message-queue \
     -p serai-processor-messages \
-    -p serai-processor \
+    -p serai-processor-key-gen \
+    -p serai-processor-view-keys \
+    -p serai-processor-frost-attempt-manager \
+    -p serai-processor-primitives \
+    -p serai-processor-scanner \
+    -p serai-processor-scheduler-primitives \
+    -p serai-processor-utxo-scheduler-primitives \
+    -p serai-processor-utxo-scheduler \
+    -p serai-processor-transaction-chaining-scheduler \
+    -p serai-processor-smart-contract-scheduler \
+    -p serai-processor-signers \
+    -p serai-processor-bin \
+    -p serai-bitcoin-processor \
+    -p serai-processor-ethereum-primitives \
+    -p serai-processor-ethereum-test-primitives \
+    -p serai-processor-ethereum-deployer \
+    -p serai-processor-ethereum-router \
+    -p serai-processor-ethereum-erc20 \
+    -p serai-ethereum-processor \
+    -p serai-monero-processor \
     -p tendermint-machine \
     -p tributary-chain \
+    -p serai-cosign \
     -p serai-coordinator \
     -p serai-orchestrator \
     -p serai-docker-tests

Cargo.lock (generated; 2178 changed lines)

File diff suppressed because it is too large


@@ -20,6 +20,7 @@ members = [
"common/patchable-async-sleep",
"common/db",
"common/env",
"common/task",
"common/request",
"crypto/transcript",
@@ -30,17 +31,25 @@ members = [
"crypto/ciphersuite",
"crypto/multiexp",
"crypto/schnorr",
"crypto/dleq",
"crypto/evrf/secq256k1",
"crypto/evrf/embedwards25519",
"crypto/evrf/generalized-bulletproofs",
"crypto/evrf/circuit-abstraction",
"crypto/evrf/divisors",
"crypto/evrf/ec-gadgets",
"crypto/dkg",
"crypto/frost",
"crypto/schnorrkel",
"networks/bitcoin",
"networks/ethereum/build-contracts",
"networks/ethereum/schnorr",
"networks/ethereum/alloy-simple-request-transport",
"networks/ethereum",
"networks/ethereum/relayer",
"networks/monero/io",
@@ -63,10 +72,33 @@ members = [
"message-queue",
"processor/messages",
"processor",
"processor/key-gen",
"processor/view-keys",
"processor/frost-attempt-manager",
"processor/primitives",
"processor/scanner",
"processor/scheduler/primitives",
"processor/scheduler/utxo/primitives",
"processor/scheduler/utxo/standard",
"processor/scheduler/utxo/transaction-chaining",
"processor/scheduler/smart-contract",
"processor/signers",
"processor/bin",
"processor/bitcoin",
"processor/ethereum/primitives",
"processor/ethereum/test-primitives",
"processor/ethereum/deployer",
"processor/ethereum/router",
"processor/ethereum/erc20",
"processor/ethereum",
"processor/monero",
"coordinator/tributary/tendermint",
"coordinator/tributary",
"coordinator/cosign",
"coordinator",
"substrate/primitives",
@@ -118,18 +150,32 @@ members = [
# to the extensive operations required for Bulletproofs
[profile.dev.package]
subtle = { opt-level = 3 }
curve25519-dalek = { opt-level = 3 }
ff = { opt-level = 3 }
group = { opt-level = 3 }
crypto-bigint = { opt-level = 3 }
secp256k1 = { opt-level = 3 }
curve25519-dalek = { opt-level = 3 }
dalek-ff-group = { opt-level = 3 }
minimal-ed448 = { opt-level = 3 }
multiexp = { opt-level = 3 }
monero-serai = { opt-level = 3 }
secq256k1 = { opt-level = 3 }
embedwards25519 = { opt-level = 3 }
generalized-bulletproofs = { opt-level = 3 }
generalized-bulletproofs-circuit-abstraction = { opt-level = 3 }
ec-divisors = { opt-level = 3 }
generalized-bulletproofs-ec-gadgets = { opt-level = 3 }
dkg = { opt-level = 3 }
monero-generators = { opt-level = 3 }
monero-borromean = { opt-level = 3 }
monero-bulletproofs = { opt-level = 3 }
monero-mlsag = { opt-level = 3 }
monero-clsag = { opt-level = 3 }
[profile.release]
panic = "unwind"
@@ -158,11 +204,12 @@ matches = { path = "patches/matches" }
option-ext = { path = "patches/option-ext" }
directories-next = { path = "patches/directories-next" }
# https://github.com/alloy-rs/core/issues/717
alloy-sol-type-parser = { git = "https://github.com/alloy-rs/core", rev = "446b9d2fbce12b88456152170709a3eaac929af0" }
# The official pasta_curves repo doesn't support Zeroize
pasta_curves = { git = "https://github.com/kayabaNerve/pasta_curves", rev = "a46b5be95cacbff54d06aad8d3bbcba42e05d616" }
[workspace.lints.clippy]
unwrap_or_default = "allow"
map_unwrap_or = "allow"
borrow_as_ptr = "deny"
cast_lossless = "deny"
cast_possible_truncation = "deny"
@@ -190,7 +237,6 @@ manual_instant_elapsed = "deny"
manual_let_else = "deny"
manual_ok_or = "deny"
manual_string_new = "deny"
map_unwrap_or = "deny"
match_bool = "deny"
match_same_arms = "deny"
missing_fields_in_debug = "deny"
@@ -202,6 +248,7 @@ range_plus_one = "deny"
redundant_closure_for_method_calls = "deny"
redundant_else = "deny"
string_add_assign = "deny"
string_slice = "deny"
unchecked_duration_subtraction = "deny"
uninlined_format_args = "deny"
unnecessary_box_returns = "deny"


@@ -1,13 +1,13 @@
 [package]
 name = "serai-db"
-version = "0.1.0"
+version = "0.1.1"
 description = "A simple database trait and backends for it"
 license = "MIT"
 repository = "https://github.com/serai-dex/serai/tree/develop/common/db"
 authors = ["Luke Parker <lukeparker5132@gmail.com>"]
 keywords = []
 edition = "2021"
-rust-version = "1.65"
+rust-version = "1.71"

 [package.metadata.docs.rs]
 all-features = true

@@ -18,7 +18,7 @@ workspace = true

 [dependencies]
 parity-db = { version = "0.4", default-features = false, optional = true }
-rocksdb = { version = "0.21", default-features = false, features = ["zstd"], optional = true }
+rocksdb = { version = "0.23", default-features = false, features = ["zstd"], optional = true }

 [features]
 parity-db = ["dep:parity-db"]

8
common/db/README.md Normal file
View File

@@ -0,0 +1,8 @@
# Serai DB
An inefficient, minimal abstraction around databases.
The abstraction offers `get`, `put`, and `del` with helper functions and macros
built on top. Database iteration is not offered, forcing the caller to manually
implement indexing schemes. This ensures wide compatibility across abstracted
databases.
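For illustration, a minimal sketch of how a caller might maintain its own index on top of `get`/`put`/`del` via the `create_db!` macro from this crate (assuming `borsh` and `parity-scale-codec`, as `scale`, are dependencies, since the macro's generated code uses them):
use serai_db::{Get, DbTxn, create_db};
create_db! {
  ExampleDb {
    // The value stored for an ID
    Values: (id: [u8; 32]) -> u64,
    // A manually-maintained index from an insertion counter to the ID, standing in for the
    // iteration this abstraction doesn't offer
    IdByIndex: (index: u32) -> [u8; 32],
    IdCount: () -> u32,
  }
}
fn insert(txn: &mut impl DbTxn, id: [u8; 32], value: u64) {
  Values::set(txn, id, &value);
  let count = IdCount::get(txn).unwrap_or(0);
  IdByIndex::set(txn, count, &id);
  IdCount::set(txn, &(count + 1));
}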

View File

@@ -38,12 +38,21 @@ pub fn serai_db_key(
#[macro_export]
macro_rules! create_db {
($db_name: ident {
$($field_name: ident: ($($arg: ident: $arg_type: ty),*) -> $field_type: ty$(,)?)*
$(
$field_name: ident:
$(<$($generic_name: tt: $generic_type: tt),+>)?(
$($arg: ident: $arg_type: ty),*
) -> $field_type: ty$(,)?
)*
}) => {
$(
#[derive(Clone, Debug)]
pub(crate) struct $field_name;
impl $field_name {
pub(crate) struct $field_name$(
<$($generic_name: $generic_type),+>
)?$(
(core::marker::PhantomData<($($generic_name),+)>)
)?;
impl$(<$($generic_name: $generic_type),+>)? $field_name$(<$($generic_name),+>)? {
pub(crate) fn key($($arg: $arg_type),*) -> Vec<u8> {
use scale::Encode;
$crate::serai_db_key(
@@ -52,18 +61,43 @@ macro_rules! create_db {
($($arg),*).encode()
)
}
pub(crate) fn set(txn: &mut impl DbTxn $(, $arg: $arg_type)*, data: &$field_type) {
let key = $field_name::key($($arg),*);
pub(crate) fn set(
txn: &mut impl DbTxn
$(, $arg: $arg_type)*,
data: &$field_type
) {
let key = Self::key($($arg),*);
txn.put(&key, borsh::to_vec(data).unwrap());
}
pub(crate) fn get(getter: &impl Get, $($arg: $arg_type),*) -> Option<$field_type> {
getter.get($field_name::key($($arg),*)).map(|data| {
pub(crate) fn get(
getter: &impl Get,
$($arg: $arg_type),*
) -> Option<$field_type> {
getter.get(Self::key($($arg),*)).map(|data| {
borsh::from_slice(data.as_ref()).unwrap()
})
}
// Returns a PhantomData of all generic types so if the generic was only used in the value,
// not the keys, this doesn't have unused generic types
#[allow(dead_code)]
pub(crate) fn del(txn: &mut impl DbTxn $(, $arg: $arg_type)*) {
txn.del(&$field_name::key($($arg),*))
pub(crate) fn del(
txn: &mut impl DbTxn
$(, $arg: $arg_type)*
) -> core::marker::PhantomData<($($($generic_name),+)?)> {
txn.del(&Self::key($($arg),*));
core::marker::PhantomData
}
pub(crate) fn take(
txn: &mut impl DbTxn
$(, $arg: $arg_type)*
) -> Option<$field_type> {
let key = Self::key($($arg),*);
let res = txn.get(&key).map(|data| borsh::from_slice(data.as_ref()).unwrap());
if res.is_some() {
txn.del(key);
}
res
}
}
)*
@@ -73,19 +107,30 @@ macro_rules! create_db {
#[macro_export]
macro_rules! db_channel {
($db_name: ident {
$($field_name: ident: ($($arg: ident: $arg_type: ty),*) -> $field_type: ty$(,)?)*
$($field_name: ident:
$(<$($generic_name: tt: $generic_type: tt),+>)?(
$($arg: ident: $arg_type: ty),*
) -> $field_type: ty$(,)?
)*
}) => {
$(
create_db! {
$db_name {
$field_name: ($($arg: $arg_type,)* index: u32) -> $field_type,
$field_name: $(<$($generic_name: $generic_type),+>)?(
$($arg: $arg_type,)*
index: u32
) -> $field_type
}
}
impl $field_name {
pub(crate) fn send(txn: &mut impl DbTxn $(, $arg: $arg_type)*, value: &$field_type) {
impl$(<$($generic_name: $generic_type),+>)? $field_name$(<$($generic_name),+>)? {
pub(crate) fn send(
txn: &mut impl DbTxn
$(, $arg: $arg_type)*
, value: &$field_type
) {
// Use index 0 to store the number of messages sent
let messages_sent_key = $field_name::key($($arg),*, 0);
let messages_sent_key = Self::key($($arg,)* 0);
let messages_sent = txn.get(&messages_sent_key).map(|counter| {
u32::from_le_bytes(counter.try_into().unwrap())
}).unwrap_or(0);
@@ -96,19 +141,35 @@ macro_rules! db_channel {
// at the same time
let index_to_use = messages_sent + 2;
$field_name::set(txn, $($arg),*, index_to_use, value);
Self::set(txn, $($arg,)* index_to_use, value);
}
pub(crate) fn try_recv(txn: &mut impl DbTxn $(, $arg: $arg_type)*) -> Option<$field_type> {
let messages_recvd_key = $field_name::key($($arg),*, 1);
pub(crate) fn peek(
getter: &impl Get
$(, $arg: $arg_type)*
) -> Option<$field_type> {
let messages_recvd_key = Self::key($($arg,)* 1);
let messages_recvd = getter.get(&messages_recvd_key).map(|counter| {
u32::from_le_bytes(counter.try_into().unwrap())
}).unwrap_or(0);
let index_to_read = messages_recvd + 2;
Self::get(getter, $($arg,)* index_to_read)
}
pub(crate) fn try_recv(
txn: &mut impl DbTxn
$(, $arg: $arg_type)*
) -> Option<$field_type> {
let messages_recvd_key = Self::key($($arg,)* 1);
let messages_recvd = txn.get(&messages_recvd_key).map(|counter| {
u32::from_le_bytes(counter.try_into().unwrap())
}).unwrap_or(0);
let index_to_read = messages_recvd + 2;
let res = $field_name::get(txn, $($arg),*, index_to_read);
let res = Self::get(txn, $($arg,)* index_to_read);
if res.is_some() {
$field_name::del(txn, $($arg),*, index_to_read);
Self::del(txn, $($arg,)* index_to_read);
txn.put(&messages_recvd_key, (messages_recvd + 1).to_le_bytes());
}
res
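To make the channel semantics concrete, a minimal sketch of the generated API (`Events` is a hypothetical channel; since indexes 0 and 1 hold the sent/received counters, messages themselves start at index 2):
use serai_db::{Get, DbTxn, Db, MemDb, db_channel};
db_channel! {
  ExampleChannels {
    Events: (key: u8) -> u32,
  }
}
fn main() {
  let mut db = MemDb::new();
  let mut txn = db.txn();
  Events::send(&mut txn, 0, &42);
  // `peek` reads the next message without consuming it
  assert_eq!(Events::peek(&txn, 0), Some(42));
  // `try_recv` consumes the message, incrementing the received counter
  assert_eq!(Events::try_recv(&mut txn, 0), Some(42));
  assert_eq!(Events::try_recv(&mut txn, 0), None);
  txn.commit();
}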

View File

@@ -14,26 +14,43 @@ mod parity_db;
#[cfg(feature = "parity-db")]
pub use parity_db::{ParityDb, new_parity_db};
/// An object implementing get.
/// An object implementing `get`.
pub trait Get {
/// Get a value from the database.
fn get(&self, key: impl AsRef<[u8]>) -> Option<Vec<u8>>;
}
/// An atomic database operation.
/// An atomic database transaction.
///
/// A transaction is only required to atomically commit. It is not required that two `Get` calls
/// made with the same transaction return the same result, if another transaction wrote to that
/// key.
///
/// If two transactions are created, and both write (including deletions) to the same key, behavior
/// is undefined. Either transaction may block, deadlock, panic, overwrite one of the two values
/// at random, or take any other action, at time of write or at time of commit.
#[must_use]
pub trait DbTxn: Send + Get {
/// Write a value to this key.
fn put(&mut self, key: impl AsRef<[u8]>, value: impl AsRef<[u8]>);
/// Delete the value from this key.
fn del(&mut self, key: impl AsRef<[u8]>);
/// Commit this transaction.
fn commit(self);
}
/// A database supporting atomic operations.
/// A database supporting atomic transactions.
pub trait Db: 'static + Send + Sync + Clone + Get {
/// The type representing a database transaction.
type Transaction<'a>: DbTxn;
/// Calculate a key for a database entry.
///
/// Keys are domain-separated by the database, the item within the database, and the item's key
/// itself.
fn key(db_dst: &'static [u8], item_dst: &'static [u8], key: impl AsRef<[u8]>) -> Vec<u8> {
let db_len = u8::try_from(db_dst.len()).unwrap();
let dst_len = u8::try_from(item_dst.len()).unwrap();
[[db_len].as_ref(), db_dst, [dst_len].as_ref(), item_dst, key.as_ref()].concat()
}
/// Open a new transaction.
fn txn(&mut self) -> Self::Transaction<'_>;
}
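Concretely, a sketch of the resulting key layout; length-prefixing both domains ensures, e.g., ("ab", "c") and ("a", "bc") produce distinct keys:
use serai_db::{Db, MemDb};
fn main() {
  // `key` is an associated function, so any `Db` type works; `MemDb` is used for illustration
  let key = <MemDb as Db>::key(b"Cosign", b"Faults", [0xAAu8]);
  // Layout: [len(db_dst)] ++ db_dst ++ [len(item_dst)] ++ item_dst ++ key
  let expected = [&[6u8][..], &b"Cosign"[..], &[6u8][..], &b"Faults"[..], &[0xAAu8][..]].concat();
  assert_eq!(key, expected);
}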

View File

@@ -11,7 +11,7 @@ use crate::*;
#[derive(PartialEq, Eq, Debug)]
pub struct MemDbTxn<'a>(&'a MemDb, HashMap<Vec<u8>, Vec<u8>>, HashSet<Vec<u8>>);
impl<'a> Get for MemDbTxn<'a> {
impl Get for MemDbTxn<'_> {
fn get(&self, key: impl AsRef<[u8]>) -> Option<Vec<u8>> {
if self.2.contains(key.as_ref()) {
return None;
@@ -23,7 +23,7 @@ impl<'a> Get for MemDbTxn<'a> {
.or_else(|| self.0 .0.read().unwrap().get(key.as_ref()).cloned())
}
}
impl<'a> DbTxn for MemDbTxn<'a> {
impl DbTxn for MemDbTxn<'_> {
fn put(&mut self, key: impl AsRef<[u8]>, value: impl AsRef<[u8]>) {
self.2.remove(key.as_ref());
self.1.insert(key.as_ref().to_vec(), value.as_ref().to_vec());

View File

@@ -7,7 +7,7 @@ repository = "https://github.com/serai-dex/serai/tree/develop/common/env"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = []
edition = "2021"
rust-version = "1.60"
rust-version = "1.71"
[package.metadata.docs.rs]
all-features = true

View File

@@ -7,6 +7,7 @@ repository = "https://github.com/serai-dex/serai/tree/develop/common/patchable-a
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = ["async", "sleep", "tokio", "smol", "async-std"]
edition = "2021"
rust-version = "1.71"
[package.metadata.docs.rs]
all-features = true

View File

@@ -7,7 +7,7 @@ repository = "https://github.com/serai-dex/serai/tree/develop/common/simple-requ
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = ["http", "https", "async", "request", "ssl"]
edition = "2021"
rust-version = "1.64"
rust-version = "1.71"
[package.metadata.docs.rs]
all-features = true

View File

@@ -7,7 +7,7 @@ repository = "https://github.com/serai-dex/serai/tree/develop/common/std-shims"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = ["nostd", "no_std", "alloc", "io"]
edition = "2021"
rust-version = "1.70"
rust-version = "1.80"
[package.metadata.docs.rs]
all-features = true
@@ -18,7 +18,7 @@ workspace = true
[dependencies]
spin = { version = "0.9", default-features = false, features = ["use_ticket_mutex", "lazy"] }
hashbrown = { version = "0.14", default-features = false, features = ["ahash", "inline-more"] }
hashbrown = { version = "0.15", default-features = false, features = ["default-hasher", "inline-more"] }
[features]
std = []

22
common/task/Cargo.toml Normal file
View File

@@ -0,0 +1,22 @@
[package]
name = "serai-task"
version = "0.1.0"
description = "A task schema for Serai services"
license = "AGPL-3.0-only"
repository = "https://github.com/serai-dex/serai/tree/develop/common/task"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = []
edition = "2021"
publish = false
rust-version = "1.75"
[package.metadata.docs.rs]
all-features = true
rustdoc-args = ["--cfg", "docsrs"]
[lints]
workspace = true
[dependencies]
log = { version = "0.4", default-features = false, features = ["std"] }
tokio = { version = "1", default-features = false, features = ["macros", "sync", "time"] }

View File

@@ -1,6 +1,6 @@
AGPL-3.0-only license
Copyright (c) 2022-2023 Luke Parker
Copyright (c) 2022-2024 Luke Parker
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License Version 3 as

3
common/task/README.md Normal file
View File

@@ -0,0 +1,3 @@
# Task
A schema to define tasks to be run ad infinitum.

159
common/task/src/lib.rs Normal file
View File

@@ -0,0 +1,159 @@
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
#![doc = include_str!("../README.md")]
#![deny(missing_docs)]
use core::{future::Future, time::Duration};
use std::sync::Arc;
use tokio::sync::{mpsc, oneshot, Mutex};
enum Closed {
NotClosed(Option<oneshot::Receiver<()>>),
Closed,
}
/// A handle for a task.
#[derive(Clone)]
pub struct TaskHandle {
run_now: mpsc::Sender<()>,
close: mpsc::Sender<()>,
closed: Arc<Mutex<Closed>>,
}
/// A task's internal structures.
pub struct Task {
run_now: mpsc::Receiver<()>,
close: mpsc::Receiver<()>,
closed: oneshot::Sender<()>,
}
impl Task {
/// Create a new task definition.
pub fn new() -> (Self, TaskHandle) {
// Uses a capacity of 1 as any call to run as soon as possible satisfies all calls to run as
// soon as possible
let (run_now_send, run_now_recv) = mpsc::channel(1);
// And any call to close satisfies all calls to close
let (close_send, close_recv) = mpsc::channel(1);
let (closed_send, closed_recv) = oneshot::channel();
(
Self { run_now: run_now_recv, close: close_recv, closed: closed_send },
TaskHandle {
run_now: run_now_send,
close: close_send,
closed: Arc::new(Mutex::new(Closed::NotClosed(Some(closed_recv)))),
},
)
}
}
impl TaskHandle {
/// Tell the task to run now (and not whenever its next iteration on a timer is).
///
/// Panics if the task has been dropped.
pub fn run_now(&self) {
#[allow(clippy::match_same_arms)]
match self.run_now.try_send(()) {
Ok(()) => {}
// NOP on full, as this task will already be run as soon as possible
Err(mpsc::error::TrySendError::Full(())) => {}
Err(mpsc::error::TrySendError::Closed(())) => {
panic!("task was unexpectedly closed when calling run_now")
}
}
}
/// Close the task.
///
/// Returns once the task shuts down after it finishes its current iteration (which may be of
/// unbounded time).
pub async fn close(self) {
// If another instance of the handle called this, don't error
let _ = self.close.send(()).await;
// Wait until we receive the closed message
let mut closed = self.closed.lock().await;
match &mut *closed {
Closed::NotClosed(ref mut recv) => {
assert_eq!(recv.take().unwrap().await, Ok(()), "continually ran task dropped itself?");
*closed = Closed::Closed;
}
Closed::Closed => {}
}
}
}
/// A task to be continually run.
pub trait ContinuallyRan: Sized + Send {
/// The number of seconds before this task should be polled again.
const DELAY_BETWEEN_ITERATIONS: u64 = 5;
/// The maximum number of seconds before this task should be run again.
///
/// Upon error, the amount of time waited will be linearly increased until this limit.
const MAX_DELAY_BETWEEN_ITERATIONS: u64 = 120;
/// Run an iteration of the task.
///
/// If this returns `true`, all dependents of the task will immediately have a new iteration ran
/// (without waiting for whatever timer they were already on).
fn run_iteration(&mut self) -> impl Send + Future<Output = Result<bool, String>>;
/// Continually run the task.
fn continually_run(
mut self,
mut task: Task,
dependents: Vec<TaskHandle>,
) -> impl Send + Future<Output = ()> {
async move {
// The default number of seconds to sleep before running the task again
let default_sleep_before_next_task = Self::DELAY_BETWEEN_ITERATIONS;
// The current number of seconds to sleep before running the task again
// We increment this upon errors in order to not flood the logs with errors
let mut current_sleep_before_next_task = default_sleep_before_next_task;
let increase_sleep_before_next_task = |current_sleep_before_next_task: &mut u64| {
let new_sleep = *current_sleep_before_next_task + default_sleep_before_next_task;
// Set a limit of sleeping for two minutes
*current_sleep_before_next_task = new_sleep.min(Self::MAX_DELAY_BETWEEN_ITERATIONS);
};
loop {
// If we were told to close/all handles were dropped, drop it
{
let should_close = task.close.try_recv();
match should_close {
Ok(()) | Err(mpsc::error::TryRecvError::Disconnected) => break,
Err(mpsc::error::TryRecvError::Empty) => {}
}
}
match self.run_iteration().await {
Ok(run_dependents) => {
// Upon a successful (error-free) loop iteration, reset the amount of time we sleep
current_sleep_before_next_task = default_sleep_before_next_task;
if run_dependents {
for dependent in &dependents {
dependent.run_now();
}
}
}
Err(e) => {
log::warn!("{}", e);
increase_sleep_before_next_task(&mut current_sleep_before_next_task);
}
}
// Don't run the task again for another few seconds UNLESS told to run now
tokio::select! {
() = tokio::time::sleep(Duration::from_secs(current_sleep_before_next_task)) => {},
msg = task.run_now.recv() => {
// Check if this is firing because the handle was dropped
if msg.is_none() {
break;
}
},
}
}
task.closed.send(()).unwrap();
}
}
}
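For illustration, a minimal sketch of defining and running a task with this schema (`Heartbeat` is a hypothetical example task; `tokio` and `log` are assumed as dependencies):
use core::future::Future;
use serai_task::{Task, ContinuallyRan};
struct Heartbeat(u64);
impl ContinuallyRan for Heartbeat {
  fn run_iteration(&mut self) -> impl Send + Future<Output = Result<bool, String>> {
    async move {
      self.0 += 1;
      log::info!("heartbeat iteration #{}", self.0);
      // Returning Ok(true) has all dependents immediately run an iteration of their own
      Ok(true)
    }
  }
}
#[tokio::main]
async fn main() {
  let (task, handle) = Task::new();
  // No dependents; on success, this task simply reruns every DELAY_BETWEEN_ITERATIONS seconds
  tokio::spawn(Heartbeat(0).continually_run(task, vec![]));
  // Trigger an immediate iteration, then shut down once the current iteration completes
  handle.run_now();
  handle.close().await;
}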

View File

@@ -7,7 +7,7 @@ repository = "https://github.com/serai-dex/serai/tree/develop/common/zalloc"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = []
edition = "2021"
rust-version = "1.77.0"
rust-version = "1.77"
[package.metadata.docs.rs]
all-features = true

View File

@@ -8,6 +8,7 @@ authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = []
edition = "2021"
publish = false
rust-version = "1.81"
[package.metadata.docs.rs]
all-features = true
@@ -20,6 +21,7 @@ workspace = true
async-trait = { version = "0.1", default-features = false }
zeroize = { version = "^1.5", default-features = false, features = ["std"] }
bitvec = { version = "1", default-features = false, features = ["std"] }
rand_core = { version = "0.6", default-features = false, features = ["std"] }
blake2 = { version = "0.10", default-features = false, features = ["std"] }

View File

@@ -0,0 +1,36 @@
[package]
name = "serai-cosign"
version = "0.1.0"
description = "Evaluator of cosigns for the Serai network"
license = "AGPL-3.0-only"
repository = "https://github.com/serai-dex/serai/tree/develop/coordinator/cosign"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = []
edition = "2021"
publish = false
rust-version = "1.81"
[package.metadata.docs.rs]
all-features = true
rustdoc-args = ["--cfg", "docsrs"]
[package.metadata.cargo-machete]
ignored = ["scale"]
[lints]
workspace = true
[dependencies]
blake2 = { version = "0.10", default-features = false, features = ["std"] }
schnorrkel = { version = "0.11", default-features = false, features = ["std"] }
scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["std", "derive"] }
borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] }
serai-client = { path = "../../substrate/client", default-features = false, features = ["serai", "borsh"] }
log = { version = "0.4", default-features = false, features = ["std"] }
tokio = { version = "1", default-features = false, features = [] }
serai-db = { path = "../../common/db" }
serai-task = { path = "../../common/task" }

View File

@@ -1,6 +1,6 @@
AGPL-3.0-only license
Copyright (c) 2022-2023 Luke Parker
Copyright (c) 2023-2024 Luke Parker
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License Version 3 as

View File

@@ -0,0 +1,121 @@
# Serai Cosign
The Serai blockchain is controlled by a set of validators referred to as the
Serai validators. These validators could attempt to double-spend, even if every
node on the network is a full node, via equivocating.
Posit:
- The Serai validators control X SRI
- The Serai validators produce block A swapping X SRI to Y XYZ
- The Serai validators produce block B swapping X SRI to Z ABC
- The Serai validators finalize block A and send to the validators for XYZ
- The Serai validators finalize block B and send to the validators for ABC
This is solved via the cosigning protocol. The validators for XYZ and the
validators for ABC each sign their view of the Serai blockchain, communicating
amongst each other to ensure consistency.
The security of the cosigning protocol is not formally proven, and there are no
claims it achieves Byzantine Fault Tolerance. This protocol is meant to be
practical and make such attacks infeasible, when they could already be argued
difficult to perform.
### Definitions
- Cosign: A signature from a non-Serai validator set for a Serai block
- Cosign Commit: A collection of cosigns which achieve the necessary weight
### Methodology
Finalized blocks from the Serai network are intended to be cosigned if they
contain burn events. Only once cosigned should non-Serai validators process
them.
Cosigning occurs by a non-Serai validator set, using their threshold keys
declared on the Serai blockchain. Once 83% of non-Serai validator sets, by
weight, cosign a block, a cosign commit is formed. A cosign commit for a block
is considered to also cosign for all blocks preceding it.
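As a concrete check of the threshold arithmetic (a sketch mirroring the integer math used by the evaluator later in this diff; with a total stake of 1,000,000, a cosign commit requires a cosigned weight of at least 830,001):
fn is_cosign_commit(weight_cosigned: u64, total_stake: u64) -> bool {
  // 83% of the total stake, plus one, using flooring integer division
  weight_cosigned >= ((total_stake * 83) / 100) + 1
}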
### Bounds Under Asynchrony
Assuming an asynchronous environment fully controlled by the adversary, 34% of
a validator set may cause an equivocation. Control of 67% of non-Serai
validator sets, by weight, is sufficient to produce two distinct cosign commits
at the same position. This is due to the honest stake, 33%, being split across
the two candidates (67% + 16.5% = 83.5%, just over the threshold). This means
the cosigning protocol may produce multiple cosign commits if 34% of 67% (just
22.78%) of the non-Serai validator sets' stake is malicious. This would be in
conjunction with 34% of the Serai validator set (assumed 20% of total stake),
for a total stake requirement of 34% of 20% + 22.78% of 80% (25.024%). This is
an increase from the 6.8% required without the cosigning protocol.
### Bounds Under Synchrony
Assuming the honest stake within the non-Serai validator sets detects the
malicious stake within their set prior to assisting in producing a cosign for
their set, for which there is a multi-second window, 67% of 67% of non-Serai
validator sets is required to produce cosigns for those sets. This raises the
total stake requirement to 42.712% (past the usual 34% threshold).
### Behavior Reliant on Synchrony
If the Serai blockchain node detects an equivocation, it will stop responding
to all RPC requests and stop participating in finalizing further blocks. This
lets the node communicate the equivocating commits to other nodes (causing them
to exhibit the same behavior), yet prevents interaction with it.
If cosigns representing 17% of the non-Serai validator sets by weight are
detected for distinct blocks at the same position, the protocol halts. An
explicit latency period of seventy seconds is enacted after receiving a cosign
commit for the detection of such an equivocation. This is largely redundant
given how the Serai blockchain node will presumably have halted itself by this
time.
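The fault threshold uses the same flooring arithmetic, without the plus one (a sketch mirroring the check in `intake_cosign` later in this diff):
fn has_faulted(weight_equivocating: u64, total_stake: u64) -> bool {
  // 17% of the total stake, using flooring integer division
  weight_equivocating >= (total_stake * 17) / 100
}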
### Equivocation-Detection Avoidance
Malicious Serai validators could avoid detection of their equivocating if they
produced two distinct blockchains, A and B, with different keys declared for
the same non-Serai validator set. While the validators following A may detect
the cosigns for distinct blocks by validators following B, the cosigns would be
assumed invalid due to their signatures being verified against distinct keys.
This is prevented by requiring cosigns on the blocks which declare new keys,
ensuring all validators have a consistent view of the keys used within the
cosigning protocol (per the bounds of the cosigning protocol). These blocks are
exempt from the general policy of cosign commits cosigning all prior blocks,
preventing the newly declared keys (which aren't yet cosigned) from being used
to cosign themselves. These cosigns are flagged as "notable", are permanently
archived, and must be synced before a validator will move forward.
Cosigning the block which declares new keys also ensures agreement on the
preceding block which declared the new set, with an exact specification of the
participants and their weight, before it impacts the cosigning protocol.
### Denial of Service Concerns
Any historical Serai validator set may trigger a chain halt by producing an
equivocation after its retirement. This requires 67% of the set to be malicious. 34% of the
active Serai validator set may also trigger a chain halt.
Since 17% of non-Serai validator sets equivocating causes a halt, 5.67% of
non-Serai validator sets' stake may cause a halt (in an asynchronous
environment fully controlled by the adversary). In a synchronous environment
where the honest stake cannot be split across two candidates, 11.33% of
non-Serai validator sets' stake is required.
The more practical attack is for one to obtain 5.67% of non-Serai validator
sets' stake, under any network conditions, and simply go offline. This will
take 17% of validator sets offline with it, preventing any cosign commits
from being performed. A fallback protocol where validators individually produce
cosigns, removing the network's horizontal scalability but ensuring liveness,
prevents this, restoring the additional requirements for control of an
asynchronous network or 11.33% of non-Serai validator sets' stake.
### TODO
The Serai node no longer responding to RPC requests upon detecting any
equivocation, and the fallback protocol where validators individually produce
signatures, are not implemented at this time. The former means the detection of
equivocating cosigns is not redundant and the latter makes 5.67% of non-Serai
validator sets' stake the DoS threshold, even without control of an
asynchronous network.

View File

@@ -0,0 +1,55 @@
use core::future::Future;
use std::time::{Duration, SystemTime};
use serai_db::*;
use serai_task::ContinuallyRan;
use crate::evaluator::CosignedBlocks;
/// How often callers should broadcast the cosigns flagged for rebroadcasting.
pub const BROADCAST_FREQUENCY: Duration = Duration::from_secs(60);
const SYNCHRONY_EXPECTATION: Duration = Duration::from_secs(10);
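// The delay sums to seventy seconds, the explicit latency period described in the README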
const ACKNOWLEDGEMENT_DELAY: Duration =
Duration::from_secs(BROADCAST_FREQUENCY.as_secs() + SYNCHRONY_EXPECTATION.as_secs());
create_db!(
SubstrateCosignDelay {
// The latest cosigned block number.
LatestCosignedBlockNumber: () -> u64,
}
);
/// A task to delay acknowledgement of cosigns.
pub(crate) struct CosignDelayTask<D: Db> {
pub(crate) db: D,
}
impl<D: Db> ContinuallyRan for CosignDelayTask<D> {
fn run_iteration(&mut self) -> impl Send + Future<Output = Result<bool, String>> {
async move {
let mut made_progress = false;
loop {
let mut txn = self.db.txn();
// Receive the next block to mark as cosigned
let Some((block_number, time_evaluated)) = CosignedBlocks::try_recv(&mut txn) else {
break;
};
// Calculate when we should mark it as valid
let time_valid =
SystemTime::UNIX_EPOCH + Duration::from_secs(time_evaluated) + ACKNOWLEDGEMENT_DELAY;
// Sleep until then
tokio::time::sleep(time_valid.duration_since(SystemTime::now()).unwrap_or(Duration::ZERO))
.await;
// Set the cosigned block
LatestCosignedBlockNumber::set(&mut txn, &block_number);
txn.commit();
made_progress = true;
}
Ok(made_progress)
}
}
}

View File

@@ -0,0 +1,222 @@
use core::future::Future;
use std::time::{Duration, SystemTime};
use serai_db::*;
use serai_task::ContinuallyRan;
use crate::{
HasEvents, GlobalSession, NetworksLatestCosignedBlock, RequestNotableCosigns,
intend::{GlobalSessionsChannel, BlockEventData, BlockEvents},
};
create_db!(
SubstrateCosignEvaluator {
// The global session currently being evaluated.
CurrentlyEvaluatedGlobalSession: () -> ([u8; 32], GlobalSession),
}
);
db_channel!(
SubstrateCosignEvaluatorChannels {
// (cosigned block, time cosign was evaluated)
CosignedBlocks: () -> (u64, u64),
}
);
// This is a strict function which won't panic, even with a malicious Serai node, so long as:
// - It's called incrementally
// - It's only called for block numbers we've completed indexing on within the intend task
// - It's only called for block numbers after a global session has started
// - The global sessions channel is populated as the block declaring the session is indexed
// Which all hold true within the context of this task and the intend task.
//
// This function will also ensure the currently evaluated global session is incremented once we
// finish evaluation of the prior session.
fn currently_evaluated_global_session_strict(
txn: &mut impl DbTxn,
block_number: u64,
) -> ([u8; 32], GlobalSession) {
let mut res = {
let existing = match CurrentlyEvaluatedGlobalSession::get(txn) {
Some(existing) => existing,
None => {
let first = GlobalSessionsChannel::try_recv(txn)
.expect("fetching latest global session yet none declared");
CurrentlyEvaluatedGlobalSession::set(txn, &first);
first
}
};
assert!(
existing.1.start_block_number <= block_number,
"candidate's start block number exceeds our block number"
);
existing
};
if let Some(next) = GlobalSessionsChannel::peek(txn) {
assert!(
block_number <= next.1.start_block_number,
"currently_evaluated_global_session_strict wasn't called incrementally"
);
// If it's time for this session to activate, take it from the channel and set it
if block_number == next.1.start_block_number {
GlobalSessionsChannel::try_recv(txn).unwrap();
CurrentlyEvaluatedGlobalSession::set(txn, &next);
res = next;
}
}
res
}
/// A task to determine if a block has been cosigned and we should handle it.
pub(crate) struct CosignEvaluatorTask<D: Db, R: RequestNotableCosigns> {
pub(crate) db: D,
pub(crate) request: R,
}
impl<D: Db, R: RequestNotableCosigns> ContinuallyRan for CosignEvaluatorTask<D, R> {
fn run_iteration(&mut self) -> impl Send + Future<Output = Result<bool, String>> {
async move {
let mut known_cosign = None;
let mut made_progress = false;
loop {
let mut txn = self.db.txn();
let Some(BlockEventData { block_number, has_events }) = BlockEvents::try_recv(&mut txn)
else {
break;
};
match has_events {
// Because this had notable events, we require an explicit cosign for this block by a
// supermajority of the prior block's validator sets
HasEvents::Notable => {
let (global_session, global_session_info) =
currently_evaluated_global_session_strict(&mut txn, block_number);
let mut weight_cosigned = 0;
for set in global_session_info.sets {
// Check if we have the cosign from this set
if NetworksLatestCosignedBlock::get(&txn, global_session, set.network)
.map(|signed_cosign| signed_cosign.cosign.block_number) ==
Some(block_number)
{
// Since we have this cosign, add the set's weight to the weight which has cosigned
weight_cosigned +=
global_session_info.stakes.get(&set.network).ok_or_else(|| {
"ValidatorSet in global session yet didn't have its stake".to_string()
})?;
}
}
// Check if the summed weight doesn't reach the required threshold
if weight_cosigned < (((global_session_info.total_stake * 83) / 100) + 1) {
// Request the necessary cosigns over the network
// TODO: Add a timer to ensure this isn't called too often
self
.request
.request_notable_cosigns(global_session)
.await
.map_err(|e| format!("{e:?}"))?;
// We return an error so the delay before this task is run again increases
return Err(format!(
"notable block (#{block_number}) wasn't yet cosigned. this should resolve shortly",
));
}
}
// Since this block didn't have any notable events, we simply require a cosign for this
// block or a greater block by the current validator sets
HasEvents::NonNotable => {
// Check if this was satisfied by a cached result which wasn't calculated incrementally
let known_cosigned = match known_cosign {
Some(cached) if cached >= block_number => true,
_ => {
// Clear `known_cosign`, which is no longer helpful
known_cosign = None;
false
}
};
// If it isn't already known to be cosigned, evaluate the latest cosigns
if !known_cosigned {
/*
NetworksLatestCosignedBlock is populated with the latest cosigns for each network which don't
exceed the latest global session we've evaluated the start of. This current block
is during the latest global session we've evaluated the start of.
*/
// Get the global session for this block
let (global_session, global_session_info) =
currently_evaluated_global_session_strict(&mut txn, block_number);
let mut weight_cosigned = 0;
let mut lowest_common_block: Option<u64> = None;
for set in global_session_info.sets {
// Check if this set cosigned this block or not
let Some(cosign) =
NetworksLatestCosignedBlock::get(&txn, global_session, set.network)
else {
continue;
};
if cosign.cosign.block_number >= block_number {
weight_cosigned +=
global_session_info.stakes.get(&set.network).ok_or_else(|| {
"ValidatorSet in global session yet didn't have its stake".to_string()
})?;
}
// Update the lowest block common to all of these cosigns
lowest_common_block = lowest_common_block
.map(|existing| existing.min(cosign.cosign.block_number))
.or(Some(cosign.cosign.block_number));
}
// Check if the summed weight doesn't reach the required threshold
if weight_cosigned < (((global_session_info.total_stake * 83) / 100) + 1) {
// Request the superseding notable cosigns over the network
// If this session hasn't yet produced notable cosigns, then we presume we'll see
// the desired non-notable cosigns as part of normal operations, without needing to
// explicitly request them
self
.request
.request_notable_cosigns(global_session)
.await
.map_err(|e| format!("{e:?}"))?;
// We return an error so the delay before this task is run again increases
return Err(format!(
"block (#{block_number}) wasn't yet cosigned. this should resolve shortly",
));
}
// Update the cached result for the block we know is cosigned
/*
There may be a higher block which was cosigned, but once we get to this block,
we'll re-evaluate and find it then. The alternative would be an optimistic
re-evaluation now. Both are fine, so the lower-complexity option is preferred.
*/
known_cosign = lowest_common_block;
}
}
// If this block has no events necessitating cosigning, we can immediately consider the
// block cosigned (making this block a NOP)
HasEvents::No => {}
}
// Since we checked we had the necessary cosigns, send it for delay before acknowledgement
CosignedBlocks::send(
&mut txn,
&(
block_number,
SystemTime::now()
.duration_since(SystemTime::UNIX_EPOCH)
.unwrap_or(Duration::ZERO)
.as_secs(),
),
);
txn.commit();
made_progress = true;
}
Ok(made_progress)
}
}
}

View File

@@ -0,0 +1,181 @@
use core::future::Future;
use std::collections::HashMap;
use serai_client::{
primitives::{SeraiAddress, Amount},
validator_sets::primitives::ValidatorSet,
Serai,
};
use serai_db::*;
use serai_task::ContinuallyRan;
use crate::*;
create_db!(
CosignIntend {
ScanCosignFrom: () -> u64,
}
);
#[derive(Debug, BorshSerialize, BorshDeserialize)]
pub(crate) struct BlockEventData {
pub(crate) block_number: u64,
pub(crate) has_events: HasEvents,
}
db_channel! {
CosignIntendChannels {
GlobalSessionsChannel: () -> ([u8; 32], GlobalSession),
BlockEvents: () -> BlockEventData,
IntendedCosigns: (set: ValidatorSet) -> CosignIntent,
}
}
async fn block_has_events_justifying_a_cosign(
serai: &Serai,
block_number: u64,
) -> Result<(Block, HasEvents), String> {
let block = serai
.finalized_block_by_number(block_number)
.await
.map_err(|e| format!("{e:?}"))?
.ok_or_else(|| "couldn't get block which should've been finalized".to_string())?;
let serai = serai.as_of(block.hash());
if !serai.validator_sets().key_gen_events().await.map_err(|e| format!("{e:?}"))?.is_empty() {
return Ok((block, HasEvents::Notable));
}
if !serai.coins().burn_with_instruction_events().await.map_err(|e| format!("{e:?}"))?.is_empty() {
return Ok((block, HasEvents::NonNotable));
}
Ok((block, HasEvents::No))
}
/// A task to determine which blocks we should intend to cosign.
pub(crate) struct CosignIntendTask<D: Db> {
pub(crate) db: D,
pub(crate) serai: Serai,
}
impl<D: Db> ContinuallyRan for CosignIntendTask<D> {
fn run_iteration(&mut self) -> impl Send + Future<Output = Result<bool, String>> {
async move {
let start_block_number = ScanCosignFrom::get(&self.db).unwrap_or(1);
let latest_block_number =
self.serai.latest_finalized_block().await.map_err(|e| format!("{e:?}"))?.number();
for block_number in start_block_number ..= latest_block_number {
let mut txn = self.db.txn();
let (block, mut has_events) =
block_has_events_justifying_a_cosign(&self.serai, block_number)
.await
.map_err(|e| format!("{e:?}"))?;
// Check we are indexing a linear chain
if (block_number > 1) &&
(<[u8; 32]>::from(block.header.parent_hash) !=
SubstrateBlocks::get(&txn, block_number - 1)
.expect("indexing a block but haven't indexed its parent"))
{
Err(format!(
"node's block #{block_number} doesn't build upon the block #{} prior indexed",
block_number - 1
))?;
}
SubstrateBlocks::set(&mut txn, block_number, &block.hash());
let global_session_for_this_block = LatestGlobalSessionIntended::get(&txn);
// If this is notable, it creates a new global session, which we index into the database
// now
if has_events == HasEvents::Notable {
let serai = self.serai.as_of(block.hash());
let sets_and_keys = cosigning_sets(&serai).await?;
let global_session =
GlobalSession::id(sets_and_keys.iter().map(|(set, _key)| *set).collect());
let mut sets = Vec::with_capacity(sets_and_keys.len());
let mut keys = HashMap::with_capacity(sets_and_keys.len());
let mut stakes = HashMap::with_capacity(sets_and_keys.len());
let mut total_stake = 0;
for (set, key) in &sets_and_keys {
sets.push(*set);
keys.insert(set.network, SeraiAddress::from(*key));
let stake = serai
.validator_sets()
.total_allocated_stake(set.network)
.await
.map_err(|e| format!("{e:?}"))?
.unwrap_or(Amount(0))
.0;
stakes.insert(set.network, stake);
total_stake += stake;
}
if total_stake == 0 {
Err(format!("cosigning sets for block #{block_number} had 0 stake in total"))?;
}
let global_session_info = GlobalSession {
// This session starts cosigning after this block, as this block must be cosigned by
// the existing validators
start_block_number: block_number + 1,
sets,
keys,
stakes,
total_stake,
};
GlobalSessions::set(&mut txn, global_session, &global_session_info);
if let Some(ending_global_session) = global_session_for_this_block {
GlobalSessionsLastBlock::set(&mut txn, ending_global_session, &block_number);
}
LatestGlobalSessionIntended::set(&mut txn, &global_session);
GlobalSessionsChannel::send(&mut txn, &(global_session, global_session_info));
}
// If there isn't anyone available to cosign this block, meaning it'll never be cosigned,
// we flag it as not having any events requiring cosigning so we don't attempt to
// sign/require a cosign for it
if global_session_for_this_block.is_none() {
has_events = HasEvents::No;
}
match has_events {
HasEvents::Notable | HasEvents::NonNotable => {
let global_session_for_this_block = global_session_for_this_block
.expect("global session for this block was None but still attempting to cosign it");
let global_session_info = GlobalSessions::get(&txn, global_session_for_this_block)
.expect("last global session intended wasn't saved to the database");
// Tell each set of their expectation to cosign this block
for set in global_session_info.sets {
log::debug!("{:?} will be cosigning block #{block_number}", set);
IntendedCosigns::send(
&mut txn,
set,
&CosignIntent {
global_session: global_session_for_this_block,
block_number,
block_hash: block.hash(),
notable: has_events == HasEvents::Notable,
},
);
}
}
HasEvents::No => {}
}
// Populate a singular feed with every block's status for the evaluator to work off of
BlockEvents::send(&mut txn, &(BlockEventData { block_number, has_events }));
// Mark this block as handled, meaning we should scan from the next block moving on
ScanCosignFrom::set(&mut txn, &(block_number + 1));
txn.commit();
}
Ok(start_block_number <= latest_block_number)
}
}
}

View File

@@ -0,0 +1,425 @@
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
#![doc = include_str!("../README.md")]
#![deny(missing_docs)]
use core::{fmt::Debug, future::Future};
use std::collections::HashMap;
use blake2::{Digest, Blake2s256};
use borsh::{BorshSerialize, BorshDeserialize};
use serai_client::{
primitives::{NetworkId, SeraiAddress},
validator_sets::primitives::{Session, ValidatorSet, KeyPair},
Public, Block, Serai, TemporalSerai,
};
use serai_db::*;
use serai_task::*;
/// The cosigns which are intended to be performed.
mod intend;
/// The evaluator of the cosigns.
mod evaluator;
/// The task to delay acknowledgement of the cosigns.
mod delay;
pub use delay::BROADCAST_FREQUENCY;
use delay::LatestCosignedBlockNumber;
/// The schnorrkel context used when signing a cosign.
pub const COSIGN_CONTEXT: &[u8] = b"serai-cosign";
/// A 'global session', defined as all validator sets used for cosigning at a given moment.
///
/// We evaluate cosign faults within a global session. This ensures even if cosigners cosign
/// distinct blocks at distinct positions within a global session, we still identify the faults.
/*
There is the attack where a validator set is given an alternate blockchain with a key generation
event at block #n, while most validator sets are given a blockchain with a key generation event
at block number #(n+1). This prevents whoever has the alternate blockchain from verifying the
cosigns on the primary blockchain, and detecting the faults, if they use the keys as of the block
prior to the block being cosigned.
We solve this by binding cosigns to a global session ID, which has a specific start block, and
reading the keys from the start block. This means that so long as all validator sets agree on the
start of a global session, they can verify all cosigns produced by that session, regardless of
how it advances. Since agreeing on the start of a global session is mandated, there's no way to
have validator sets follow two distinct global sessions without breaking the bounds of the
cosigning protocol.
*/
#[derive(Debug, BorshSerialize, BorshDeserialize)]
pub(crate) struct GlobalSession {
pub(crate) start_block_number: u64,
pub(crate) sets: Vec<ValidatorSet>,
pub(crate) keys: HashMap<NetworkId, SeraiAddress>,
pub(crate) stakes: HashMap<NetworkId, u64>,
pub(crate) total_stake: u64,
}
impl GlobalSession {
fn id(mut cosigners: Vec<ValidatorSet>) -> [u8; 32] {
cosigners.sort_by_key(|a| borsh::to_vec(a).unwrap());
Blake2s256::digest(borsh::to_vec(&cosigners).unwrap()).into()
}
}
create_db! {
Cosign {
// The following are populated by the intend task and used throughout the library
// An index of Substrate blocks
SubstrateBlocks: (block_number: u64) -> [u8; 32],
// A mapping from a global session's ID to its relevant information.
GlobalSessions: (global_session: [u8; 32]) -> GlobalSession,
// The last block to be cosigned by a global session.
GlobalSessionsLastBlock: (global_session: [u8; 32]) -> u64,
// The latest global session intended.
//
// This is distinct from the latest global session for which we've evaluated the cosigns for.
LatestGlobalSessionIntended: () -> [u8; 32],
// The following are managed by the `intake_cosign` function present in this file
// The latest cosigned block for each network.
//
// This will only be populated with cosigns predating or during the most recent global session
// to have its start cosigned.
//
// The global session changes upon a notable block, causing each global session to have exactly
// one notable block. All validator sets will explicitly produce a cosign for their notable
// block, so a network's latest cosign within a global session is either its cosign of the
// session's notable block or simply its latest cosign within that session.
NetworksLatestCosignedBlock: (global_session: [u8; 32], network: NetworkId) -> SignedCosign,
// Cosigns received for blocks not locally recognized as finalized.
Faults: (global_session: [u8; 32]) -> Vec<SignedCosign>,
// The global session which faulted.
FaultedSession: () -> [u8; 32],
}
}
/// If the block has events.
#[derive(Clone, Copy, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)]
enum HasEvents {
/// The block had a notable event.
///
/// This is a special case as blocks with key gen events change the keys used for cosigning, and
/// accordingly must be cosigned before we advance past them.
Notable,
/// The block had a non-notable event justifying a cosign.
NonNotable,
/// The block didn't have an event justifying a cosign.
No,
}
/// An intended cosign.
#[derive(Clone, Copy, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)]
struct CosignIntent {
/// The global session this cosign is being performed under.
global_session: [u8; 32],
/// The number of the block to cosign.
block_number: u64,
/// The hash of the block to cosign.
block_hash: [u8; 32],
/// If this cosign must be handled before further cosigns are.
notable: bool,
}
/// A cosign.
#[derive(Clone, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)]
pub struct Cosign {
/// The global session this cosign is being performed under.
pub global_session: [u8; 32],
/// The number of the block to cosign.
pub block_number: u64,
/// The hash of the block to cosign.
pub block_hash: [u8; 32],
/// The actual cosigner.
pub cosigner: NetworkId,
}
/// A signed cosign.
#[derive(Clone, Debug, BorshSerialize, BorshDeserialize)]
pub struct SignedCosign {
/// The cosign.
pub cosign: Cosign,
/// The signature for the cosign.
pub signature: [u8; 64],
}
impl SignedCosign {
fn verify_signature(&self, signer: serai_client::Public) -> bool {
let Ok(signer) = schnorrkel::PublicKey::from_bytes(&signer.0) else { return false };
let Ok(signature) = schnorrkel::Signature::from_bytes(&self.signature) else { return false };
signer.verify_simple(COSIGN_CONTEXT, &borsh::to_vec(&self.cosign).unwrap(), &signature).is_ok()
}
}
/// Fetch the keys used for cosigning by a specific network.
async fn keys_for_network(
serai: &TemporalSerai<'_>,
network: NetworkId,
) -> Result<Option<(Session, KeyPair)>, String> {
let Some(latest_session) =
serai.validator_sets().session(network).await.map_err(|e| format!("{e:?}"))?
else {
// If this network hasn't had a session declared, move on
return Ok(None);
};
// Get the keys for the latest session
if let Some(keys) = serai
.validator_sets()
.keys(ValidatorSet { network, session: latest_session })
.await
.map_err(|e| format!("{e:?}"))?
{
return Ok(Some((latest_session, keys)));
}
// If the latest session has yet to set keys, use the prior session
if let Some(prior_session) = latest_session.0.checked_sub(1).map(Session) {
if let Some(keys) = serai
.validator_sets()
.keys(ValidatorSet { network, session: prior_session })
.await
.map_err(|e| format!("{e:?}"))?
{
return Ok(Some((prior_session, keys)));
}
}
Ok(None)
}
/// Fetch the `ValidatorSet`s, and their associated keys, used for cosigning as of this block.
async fn cosigning_sets(serai: &TemporalSerai<'_>) -> Result<Vec<(ValidatorSet, Public)>, String> {
let mut sets = Vec::with_capacity(serai_client::primitives::NETWORKS.len());
for network in serai_client::primitives::NETWORKS {
let Some((session, keys)) = keys_for_network(serai, network).await? else {
// If this network doesn't have usable keys, move on
continue;
};
sets.push((ValidatorSet { network, session }, keys.0));
}
Ok(sets)
}
/// An object usable to request notable cosigns for a block.
pub trait RequestNotableCosigns: 'static + Send {
/// The error type which may be encountered when requesting notable cosigns.
type Error: Debug;
/// Request the notable cosigns for this global session.
fn request_notable_cosigns(
&self,
global_session: [u8; 32],
) -> impl Send + Future<Output = Result<(), Self::Error>>;
}
/// An error used to indicate the cosigning protocol has faulted.
pub struct Faulted;
/// The interface to manage cosigning with.
pub struct Cosigning<D: Db> {
db: D,
}
impl<D: Db> Cosigning<D> {
/// Spawn the tasks to intend and evaluate cosigns.
///
/// The database specified must only be used with a singular instance of the Serai network, and
/// only used once at any given time.
pub fn spawn<R: RequestNotableCosigns>(
db: D,
serai: Serai,
request: R,
tasks_to_run_upon_cosigning: Vec<TaskHandle>,
) -> Self {
let (intend_task, _intend_task_handle) = Task::new();
let (evaluator_task, evaluator_task_handle) = Task::new();
let (delay_task, delay_task_handle) = Task::new();
tokio::spawn(
(intend::CosignIntendTask { db: db.clone(), serai })
.continually_run(intend_task, vec![evaluator_task_handle]),
);
tokio::spawn(
(evaluator::CosignEvaluatorTask { db: db.clone(), request })
.continually_run(evaluator_task, vec![delay_task_handle]),
);
tokio::spawn(
(delay::CosignDelayTask { db: db.clone() })
.continually_run(delay_task, tasks_to_run_upon_cosigning),
);
Self { db }
}
/// The latest cosigned block number.
pub fn latest_cosigned_block_number(&self) -> Result<u64, Faulted> {
if FaultedSession::get(&self.db).is_some() {
Err(Faulted)?;
}
Ok(LatestCosignedBlockNumber::get(&self.db).unwrap_or(0))
}
/// Fetch the notable cosigns for a global session in order to respond to requests.
///
/// If this global session hasn't produced any notable cosigns, this will return the latest
/// cosigns for this session.
pub fn notable_cosigns(&self, global_session: [u8; 32]) -> Vec<SignedCosign> {
let mut cosigns = Vec::with_capacity(serai_client::primitives::NETWORKS.len());
for network in serai_client::primitives::NETWORKS {
if let Some(cosign) = NetworksLatestCosignedBlock::get(&self.db, global_session, network) {
cosigns.push(cosign);
}
}
cosigns
}
/// The cosigns to rebroadcast every `BROADCAST_FREQUENCY` seconds.
///
/// This will be the most recent cosigns, in case the initial broadcast failed, or the faulty
/// cosigns, in case of a fault, to induce identification of the fault by others.
pub fn cosigns_to_rebroadcast(&self) -> Vec<SignedCosign> {
if let Some(faulted) = FaultedSession::get(&self.db) {
let mut cosigns = Faults::get(&self.db, faulted).expect("faulted with no faults");
// Also include all of our recognized-as-honest cosigns in an attempt to induce fault
// identification in those who see the faulty cosigns as honest
for network in serai_client::primitives::NETWORKS {
if let Some(cosign) = NetworksLatestCosignedBlock::get(&self.db, faulted, network) {
if cosign.cosign.global_session == faulted {
cosigns.push(cosign);
}
}
}
cosigns
} else {
let Some(latest_global_session) = LatestGlobalSessionIntended::get(&self.db) else {
return vec![];
};
let mut cosigns = Vec::with_capacity(serai_client::primitives::NETWORKS.len());
for network in serai_client::primitives::NETWORKS {
if let Some(cosign) =
NetworksLatestCosignedBlock::get(&self.db, latest_global_session, network)
{
cosigns.push(cosign);
}
}
cosigns
}
}
/// Intake a cosign from the Serai network.
///
/// - Returns Err(_) if there was an error trying to validate the cosign and it should be retried
/// later.
/// - Returns Ok(true) if the cosign was successfully handled or could not be handled at this
/// time.
/// - Returns Ok(false) if the cosign was invalid.
//
// We collapse a cosign which shouldn't be handled yet into a valid cosign (`Ok(true)`) as we
// assume we'll either explicitly request it if we need it or we'll naturally see it (or a later,
// more relevant, cosign) again.
//
// Takes `&mut self` as this should only be called once at any given moment.
// TODO: Don't overload bool here
pub fn intake_cosign(&mut self, signed_cosign: &SignedCosign) -> Result<bool, String> {
let cosign = &signed_cosign.cosign;
let network = cosign.cosigner;
// Check our indexed blockchain includes a block with this block number
let Some(our_block_hash) = SubstrateBlocks::get(&self.db, cosign.block_number) else {
return Ok(true);
};
let faulty = cosign.block_hash != our_block_hash;
// Check this isn't a dated cosign within its global session (as it would be if rebroadcasted)
if !faulty {
if let Some(existing) =
NetworksLatestCosignedBlock::get(&self.db, cosign.global_session, network)
{
if existing.cosign.block_number >= cosign.block_number {
return Ok(true);
}
}
}
let Some(global_session) = GlobalSessions::get(&self.db, cosign.global_session) else {
// Unrecognized global session
return Ok(true);
};
// Check the cosigned block number is in range to the global session
if cosign.block_number < global_session.start_block_number {
// Cosign is for a block predating the global session
return Ok(false);
}
if !faulty {
// This prevents a malicious validator set, on the same chain, from producing a cosign after
// their final block, replacing their notable cosign
if let Some(last_block) = GlobalSessionsLastBlock::get(&self.db, cosign.global_session) {
if cosign.block_number > last_block {
// Cosign is for a block after the last block this global session should have signed
return Ok(false);
}
}
}
// Check the cosign's signature
{
let key = Public::from({
let Some(key) = global_session.keys.get(&network) else {
return Ok(false);
};
*key
});
if !signed_cosign.verify_signature(key) {
return Ok(false);
}
}
// Since we verified this cosign's signature, and have a chain sufficiently long, handle the
// cosign
let mut txn = self.db.txn();
if !faulty {
// If this is for a future global session, we don't acknowledge this cosign at this time
let latest_cosigned_block_number = LatestCosignedBlockNumber::get(&txn).unwrap_or(0);
// This global session starts the block *after* its declaration, so we want to check if the
// block declaring it was cosigned
if (global_session.start_block_number - 1) > latest_cosigned_block_number {
drop(txn);
return Ok(true);
}
// This is safe as it's in-range and newer, as prior checked since it isn't faulty
NetworksLatestCosignedBlock::set(&mut txn, cosign.global_session, network, signed_cosign);
} else {
let mut faults = Faults::get(&txn, cosign.global_session).unwrap_or(vec![]);
// Only handle this as a fault if this set wasn't prior faulty
if !faults.iter().any(|cosign| cosign.cosign.cosigner == network) {
faults.push(signed_cosign.clone());
Faults::set(&mut txn, cosign.global_session, &faults);
let mut weight_cosigned = 0;
for fault in &faults {
let Some(stake) = global_session.stakes.get(&fault.cosign.cosigner) else {
Err("cosigner with recognized key didn't have a stake entry saved".to_string())?
};
weight_cosigned += stake;
}
// Check if the summed weight means a fault has occurred
if weight_cosigned >= ((global_session.total_stake * 17) / 100) {
FaultedSession::set(&mut txn, &cosign.global_session);
}
}
}
txn.commit();
Ok(true)
}
}
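For illustration, a sketch of how a service with a feed of cosigns (such as over an RPC) might drive this interface; `NoopRequest` is a hypothetical stand-in for an implementation which actually requests notable cosigns over a network:
use core::future::Future;
use serai_client::Serai;
use serai_cosign::{Cosigning, RequestNotableCosigns, SignedCosign};
struct NoopRequest;
impl RequestNotableCosigns for NoopRequest {
  type Error = ();
  fn request_notable_cosigns(
    &self,
    _global_session: [u8; 32],
  ) -> impl Send + Future<Output = Result<(), ()>> {
    async move { Ok(()) }
  }
}
async fn intake(db: serai_db::MemDb, serai: Serai, feed: Vec<SignedCosign>) {
  let mut cosigning = Cosigning::spawn(db, serai, NoopRequest, vec![]);
  for cosign in &feed {
    match cosigning.intake_cosign(cosign) {
      // Handled, or safely ignorable at this time
      Ok(true) => {}
      // Invalid; drop it
      Ok(false) => {}
      // Transient error; retry this cosign later
      Err(e) => log::warn!("{e}"),
    }
  }
}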

View File

@@ -1,336 +0,0 @@
use core::time::Duration;
use std::{
sync::Arc,
collections::{HashSet, HashMap},
};
use tokio::{
sync::{mpsc, Mutex, RwLock},
time::sleep,
};
use borsh::BorshSerialize;
use sp_application_crypto::RuntimePublic;
use serai_client::{
primitives::{NETWORKS, NetworkId, Signature},
validator_sets::primitives::{Session, ValidatorSet},
SeraiError, TemporalSerai, Serai,
};
use serai_db::{Get, DbTxn, Db, create_db};
use processor_messages::coordinator::cosign_block_msg;
use crate::{
p2p::{CosignedBlock, GossipMessageKind, P2p},
substrate::LatestCosignedBlock,
};
create_db! {
CosignDb {
ReceivedCosign: (set: ValidatorSet, block: [u8; 32]) -> CosignedBlock,
LatestCosign: (network: NetworkId) -> CosignedBlock,
DistinctChain: (set: ValidatorSet) -> (),
}
}
pub struct CosignEvaluator<D: Db> {
db: Mutex<D>,
serai: Arc<Serai>,
stakes: RwLock<Option<HashMap<NetworkId, u64>>>,
latest_cosigns: RwLock<HashMap<NetworkId, CosignedBlock>>,
}
impl<D: Db> CosignEvaluator<D> {
async fn update_latest_cosign(&self) {
let stakes_lock = self.stakes.read().await;
// If we haven't gotten the stake data yet, return
let Some(stakes) = stakes_lock.as_ref() else { return };
let total_stake = stakes.values().copied().sum::<u64>();
let latest_cosigns = self.latest_cosigns.read().await;
let mut highest_block = 0;
for cosign in latest_cosigns.values() {
let mut networks = HashSet::new();
for (network, sub_cosign) in &*latest_cosigns {
if sub_cosign.block_number >= cosign.block_number {
networks.insert(network);
}
}
let sum_stake =
networks.into_iter().map(|network| stakes.get(network).unwrap_or(&0)).sum::<u64>();
let needed_stake = ((total_stake * 2) / 3) + 1;
if (total_stake == 0) || (sum_stake > needed_stake) {
highest_block = highest_block.max(cosign.block_number);
}
}
let mut db_lock = self.db.lock().await;
let mut txn = db_lock.txn();
if highest_block > LatestCosignedBlock::latest_cosigned_block(&txn) {
log::info!("setting latest cosigned block to {}", highest_block);
LatestCosignedBlock::set(&mut txn, &highest_block);
}
txn.commit();
}
async fn update_stakes(&self) -> Result<(), SeraiError> {
let serai = self.serai.as_of_latest_finalized_block().await?;
let mut stakes = HashMap::new();
for network in NETWORKS {
// Use if this network has published a Batch for a short-circuit of if they've ever set a key
let set_key = serai.in_instructions().last_batch_for_network(network).await?.is_some();
if set_key {
stakes.insert(
network,
serai
.validator_sets()
.total_allocated_stake(network)
.await?
.expect("network which published a batch didn't have a stake set")
.0,
);
}
}
// Since we've successfully built stakes, set it
*self.stakes.write().await = Some(stakes);
self.update_latest_cosign().await;
Ok(())
}
// Uses Err to signify a message should be retried
async fn handle_new_cosign(&self, cosign: CosignedBlock) -> Result<(), SeraiError> {
// If we already have this cosign or a newer cosign, return
if let Some(latest) = self.latest_cosigns.read().await.get(&cosign.network) {
if latest.block_number >= cosign.block_number {
return Ok(());
}
}
// If this an old cosign (older than a day), drop it
let latest_block = self.serai.latest_finalized_block().await?;
if (cosign.block_number + (24 * 60 * 60 / 6)) < latest_block.number() {
log::debug!("received old cosign supposedly signed by {:?}", cosign.network);
return Ok(());
}
let Some(block) = self.serai.finalized_block_by_number(cosign.block_number).await? else {
log::warn!("received cosign with a block number which doesn't map to a block");
return Ok(());
};
async fn set_with_keys_fn(
serai: &TemporalSerai<'_>,
network: NetworkId,
) -> Result<Option<ValidatorSet>, SeraiError> {
let Some(latest_session) = serai.validator_sets().session(network).await? else {
log::warn!("received cosign from {:?}, which doesn't yet have a session", network);
return Ok(None);
};
let prior_session = Session(latest_session.0.saturating_sub(1));
Ok(Some(
if serai
.validator_sets()
.keys(ValidatorSet { network, session: prior_session })
.await?
.is_some()
{
ValidatorSet { network, session: prior_session }
} else {
ValidatorSet { network, session: latest_session }
},
))
}
// Get the key for this network as of the prior block
// If we have two chains, this value may be different across chains depending on whether one
// chain included the set_keys and the other didn't
// Because set_keys will force a cosign, it will force detection of distinct blocks
// The block with the set_keys itself is cosigned using the keys from before the set_keys (a
// policy assumed amenable to all)
let serai = self.serai.as_of(block.header.parent_hash.into());
let Some(set_with_keys) = set_with_keys_fn(&serai, cosign.network).await? else {
return Ok(());
};
let Some(keys) = serai.validator_sets().keys(set_with_keys).await? else {
log::warn!("received cosign for a block we didn't have keys for");
return Ok(());
};
if !keys
.0
.verify(&cosign_block_msg(cosign.block_number, cosign.block), &Signature(cosign.signature))
{
log::warn!("received cosigned block with an invalid signature");
return Ok(());
}
log::info!(
"received cosign for block {} ({}) by {:?}",
block.number(),
hex::encode(cosign.block),
cosign.network
);
// Save this cosign to the DB
{
let mut db = self.db.lock().await;
let mut txn = db.txn();
ReceivedCosign::set(&mut txn, set_with_keys, cosign.block, &cosign);
LatestCosign::set(&mut txn, set_with_keys.network, &cosign);
txn.commit();
}
if cosign.block != block.hash() {
log::error!(
"received cosign for a distinct block at {}. we have {}. cosign had {}",
cosign.block_number,
hex::encode(block.hash()),
hex::encode(cosign.block)
);
let serai = self.serai.as_of(latest_block.hash());
let mut db = self.db.lock().await;
// Save this set as being on a different chain
let mut txn = db.txn();
DistinctChain::set(&mut txn, set_with_keys, &());
txn.commit();
let mut total_stake = 0;
let mut total_on_distinct_chain = 0;
for network in NETWORKS {
if network == NetworkId::Serai {
continue;
}
// Get the current set for this network
let set_with_keys = {
let mut res;
while {
res = set_with_keys_fn(&serai, network).await;
res.is_err()
} {
log::error!(
"couldn't get the set with keys when checking for a distinct chain: {:?}",
res
);
tokio::time::sleep(core::time::Duration::from_secs(3)).await;
}
res.unwrap()
};
// Get its stake
// Doesn't use the stakes inside self to prevent deadlocks re: multi-lock acquisition
if let Some(set_with_keys) = set_with_keys {
let stake = {
let mut res;
while {
res = serai.validator_sets().total_allocated_stake(set_with_keys.network).await;
res.is_err()
} {
log::error!(
"couldn't get total allocated stake when checking for a distinct chain: {:?}",
res
);
tokio::time::sleep(core::time::Duration::from_secs(3)).await;
}
res.unwrap()
};
if let Some(stake) = stake {
total_stake += stake.0;
if DistinctChain::get(&*db, set_with_keys).is_some() {
total_on_distinct_chain += stake.0;
}
}
}
}
// See https://github.com/serai-dex/serai/issues/339 for the reasoning on 17%
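// e.g. with total_stake = 1_000, this panics once total_on_distinct_chain reaches 170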
if (total_stake * 17 / 100) <= total_on_distinct_chain {
panic!("17% of validator sets (by stake) have co-signed a distinct chain");
}
} else {
{
let mut latest_cosigns = self.latest_cosigns.write().await;
latest_cosigns.insert(cosign.network, cosign);
}
self.update_latest_cosign().await;
}
Ok(())
}
#[allow(clippy::new_ret_no_self)]
pub fn new<P: P2p>(db: D, p2p: P, serai: Arc<Serai>) -> mpsc::UnboundedSender<CosignedBlock> {
let mut latest_cosigns = HashMap::new();
for network in NETWORKS {
if let Some(cosign) = LatestCosign::get(&db, network) {
latest_cosigns.insert(network, cosign);
}
}
let evaluator = Arc::new(Self {
db: Mutex::new(db),
serai,
stakes: RwLock::new(None),
latest_cosigns: RwLock::new(latest_cosigns),
});
// Spawn a task to update stakes regularly
tokio::spawn({
let evaluator = evaluator.clone();
async move {
loop {
// Run this until it passes
while evaluator.update_stakes().await.is_err() {
log::warn!("couldn't update stakes in the cosign evaluator");
// Try again in 10 seconds
sleep(Duration::from_secs(10)).await;
}
// Run it every 10 minutes as we don't need the exact stake data for this to be valid
sleep(Duration::from_secs(10 * 60)).await;
}
}
});
// Spawn a task to receive cosigns and handle them
let (send, mut recv) = mpsc::unbounded_channel();
tokio::spawn({
let evaluator = evaluator.clone();
async move {
while let Some(msg) = recv.recv().await {
while evaluator.handle_new_cosign(msg).await.is_err() {
// Try again in 10 seconds
sleep(Duration::from_secs(10)).await;
}
}
}
});
// Spawn a task to rebroadcast the most recent cosigns
tokio::spawn({
async move {
loop {
let cosigns = evaluator.latest_cosigns.read().await.values().copied().collect::<Vec<_>>();
for cosign in cosigns {
let mut buf = vec![];
cosign.serialize(&mut buf).unwrap();
P2p::broadcast(&p2p, GossipMessageKind::CosignedBlock, buf).await;
}
sleep(Duration::from_secs(60)).await;
}
}
});
// Return the channel to send cosigns
send
}
}
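
As a usage sketch, the sender returned by `new` is fed cosigns as they arrive off the wire. The following is hypothetical glue, not code from this commit; the `p2p.receive()` shape and the `CosignedBlock::deserialize` signature are assumptions:

// Hypothetical wiring: forward received gossip into the evaluator's channel.
let cosigns = CosignEvaluator::new(db, p2p.clone(), serai.clone());
tokio::spawn(async move {
  loop {
    // Assumed API: yields (message kind, payload) pairs from the gossip network
    let (kind, msg) = p2p.receive().await;
    if kind == GossipMessageKind::CosignedBlock {
      if let Ok(cosign) = CosignedBlock::deserialize(&mut msg.as_slice()) {
        // Sending only fails if the evaluator's handler task has shut down
        let _ = cosigns.send(cosign);
      }
    }
  }
});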

View File

@@ -16,7 +16,6 @@ use ciphersuite::{
Ciphersuite, Ristretto,
};
use schnorr::SchnorrSignature;
use frost::Participant;
use serai_db::{DbTxn, Db};
@@ -114,16 +113,17 @@ async fn add_tributary<D: Db, Pro: Processors, P: P2p>(
// If we're rebooting, we'll re-fire this message
// This is safe due to the message-queue deduplicating based off the intent system
let set = spec.set();
let our_i = spec
.i(&[], Ristretto::generator() * key.deref())
.expect("adding a tributary for a set we aren't in set for");
processors
.send(
set.network,
processor_messages::key_gen::CoordinatorMessage::GenerateKey {
id: processor_messages::key_gen::KeyGenId { session: set.session, attempt: 0 },
params: frost::ThresholdParams::new(spec.t(), spec.n(&[]), our_i.start).unwrap(),
shares: u16::from(our_i.end) - u16::from(our_i.start),
session: set.session,
threshold: spec.t(),
evrf_public_keys: spec.evrf_public_keys(),
// TODO
// params: frost::ThresholdParams::new(spec.t(), spec.n(&[]), our_i.start).unwrap(),
// shares: u16::from(our_i.end) - u16::from(our_i.start),
},
)
.await;
@@ -166,12 +166,9 @@ async fn handle_processor_message<D: Db, P: P2p>(
// We'll only receive these if we fired GenerateKey, which we'll only do if we're
// in-set, making the Tributary relevant
ProcessorMessage::KeyGen(inner_msg) => match inner_msg {
key_gen::ProcessorMessage::Commitments { id, .. } |
key_gen::ProcessorMessage::InvalidCommitments { id, .. } |
key_gen::ProcessorMessage::Shares { id, .. } |
key_gen::ProcessorMessage::InvalidShare { id, .. } |
key_gen::ProcessorMessage::GeneratedKeyPair { id, .. } |
key_gen::ProcessorMessage::Blame { id, .. } => Some(id.session),
key_gen::ProcessorMessage::Participation { session, .. } |
key_gen::ProcessorMessage::GeneratedKeyPair { session, .. } |
key_gen::ProcessorMessage::Blame { session, .. } => Some(*session),
},
ProcessorMessage::Sign(inner_msg) => match inner_msg {
// We'll only receive InvalidParticipant/Preprocess/Share if we're actively signing
@@ -421,125 +418,33 @@ async fn handle_processor_message<D: Db, P: P2p>(
let txs = match msg.msg.clone() {
ProcessorMessage::KeyGen(inner_msg) => match inner_msg {
key_gen::ProcessorMessage::Commitments { id, commitments } => {
vec![Transaction::DkgCommitments {
attempt: id.attempt,
commitments,
signed: Transaction::empty_signed(),
}]
key_gen::ProcessorMessage::Participation { session, participation } => {
assert_eq!(session, spec.set().session);
vec![Transaction::DkgParticipation { participation, signed: Transaction::empty_signed() }]
}
key_gen::ProcessorMessage::InvalidCommitments { id, faulty } => {
// This doesn't have guaranteed timing
//
// While the party *should* be fatally slashed and not included in future attempts,
// they'll actually be fatally slashed (assuming liveness before the Tributary retires)
// and not included in future attempts *which begin after the latency window completes*
let participant = spec
.reverse_lookup_i(
&crate::tributary::removed_as_of_dkg_attempt(&txn, spec.genesis(), id.attempt)
.expect("participating in DKG attempt yet we didn't save who was removed"),
faulty,
)
.unwrap();
vec![Transaction::RemoveParticipantDueToDkg {
participant,
signed: Transaction::empty_signed(),
}]
}
key_gen::ProcessorMessage::Shares { id, mut shares } => {
// Create a MuSig-based machine to inform Substrate of this key generation
let nonces = crate::tributary::dkg_confirmation_nonces(key, spec, &mut txn, id.attempt);
let removed = crate::tributary::removed_as_of_dkg_attempt(&txn, genesis, id.attempt)
.expect("participating in a DKG attempt yet we didn't track who was removed yet?");
let our_i = spec
.i(&removed, pub_key)
.expect("processor message to DKG for an attempt we aren't a validator in");
// `tx_shares` needs to be done here as while it can be serialized from the HashMap
// without further context, it can't be deserialized without context
let mut tx_shares = Vec::with_capacity(shares.len());
for shares in &mut shares {
tx_shares.push(vec![]);
for i in 1 ..= spec.n(&removed) {
let i = Participant::new(i).unwrap();
if our_i.contains(&i) {
if shares.contains_key(&i) {
panic!("processor sent us our own shares");
}
continue;
}
tx_shares.last_mut().unwrap().push(
shares.remove(&i).expect("processor didn't send share for another validator"),
);
}
}
vec![Transaction::DkgShares {
attempt: id.attempt,
shares: tx_shares,
confirmation_nonces: nonces,
signed: Transaction::empty_signed(),
}]
}
key_gen::ProcessorMessage::InvalidShare { id, accuser, faulty, blame } => {
vec![Transaction::InvalidDkgShare {
attempt: id.attempt,
accuser,
faulty,
blame,
signed: Transaction::empty_signed(),
}]
}
key_gen::ProcessorMessage::GeneratedKeyPair { id, substrate_key, network_key } => {
// TODO2: Check the KeyGenId fields
// Tell the Tributary the key pair, get back the share for the MuSig signature
let share = crate::tributary::generated_key_pair::<D>(
key_gen::ProcessorMessage::GeneratedKeyPair { session, substrate_key, network_key } => {
assert_eq!(session, spec.set().session);
crate::tributary::generated_key_pair::<D>(
&mut txn,
key,
spec,
genesis,
&KeyPair(Public(substrate_key), network_key.try_into().unwrap()),
id.attempt,
);
// TODO: Move this into generated_key_pair?
match share {
Ok(share) => {
vec![Transaction::DkgConfirmed {
attempt: id.attempt,
confirmation_share: share,
signed: Transaction::empty_signed(),
}]
}
Err(p) => {
let participant = spec
.reverse_lookup_i(
&crate::tributary::removed_as_of_dkg_attempt(&txn, spec.genesis(), id.attempt)
.expect("participating in DKG attempt yet we didn't save who was removed"),
p,
)
.unwrap();
vec![Transaction::RemoveParticipantDueToDkg {
participant,
signed: Transaction::empty_signed(),
}]
}
}
}
key_gen::ProcessorMessage::Blame { id, participant } => {
let participant = spec
.reverse_lookup_i(
&crate::tributary::removed_as_of_dkg_attempt(&txn, spec.genesis(), id.attempt)
.expect("participating in DKG attempt yet we didn't save who was removed"),
participant,
)
.unwrap();
vec![Transaction::RemoveParticipantDueToDkg {
participant,
// Create a MuSig-based machine to inform Substrate of this key generation
let confirmation_nonces =
crate::tributary::dkg_confirmation_nonces(key, spec, &mut txn, 0);
vec![Transaction::DkgConfirmationNonces {
attempt: 0,
confirmation_nonces,
signed: Transaction::empty_signed(),
}]
}
key_gen::ProcessorMessage::Blame { session, participant } => {
assert_eq!(session, spec.set().session);
let participant = spec.reverse_lookup_i(participant).unwrap();
vec![Transaction::RemoveParticipant { participant, signed: Transaction::empty_signed() }]
}
},
ProcessorMessage::Sign(msg) => match msg {
sign::ProcessorMessage::InvalidParticipant { .. } => {

View File

@@ -1,332 +0,0 @@
/*
If:
A) This block has events and it's been at least X blocks since the last cosign or
B) This block doesn't have events but it's been X blocks since a skipped block which did
have events or
C) This block key gens (which changes who the cosigners are)
cosign this block.
This creates both a minimum and maximum delay of X blocks before a block's cosigning begins,
barring key gens which are exceptional. The minimum delay is there to ensure we don't constantly
spawn new protocols every 6 seconds, overwriting the old ones. The maximum delay is there to
ensure any block needing to be cosigned is cosigned within a reasonable amount of time.
*/
use zeroize::Zeroizing;
use ciphersuite::{Ciphersuite, Ristretto};
use borsh::{BorshSerialize, BorshDeserialize};
use serai_client::{
SeraiError, Serai,
primitives::NetworkId,
validator_sets::primitives::{Session, ValidatorSet},
};
use serai_db::*;
use crate::{Db, substrate::in_set, tributary::SeraiBlockNumber};
// 5 minutes, expressed in blocks
// TODO: Pull a constant for block time
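// 5 * 60 / 6 = 50 blocks, assuming the 6-second block time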
const COSIGN_DISTANCE: u64 = 5 * 60 / 6;
#[derive(Clone, Copy, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)]
enum HasEvents {
KeyGen,
Yes,
No,
}
create_db!(
SubstrateCosignDb {
ScanCosignFrom: () -> u64,
IntendedCosign: () -> (u64, Option<u64>),
BlockHasEventsCache: (block: u64) -> HasEvents,
LatestCosignedBlock: () -> u64,
}
);
impl IntendedCosign {
// Sets the intended-to-cosign block, clearing the prior value entirely.
pub fn set_intended_cosign(txn: &mut impl DbTxn, intended: u64) {
Self::set(txn, &(intended, None::<u64>));
}
// Sets the cosign skipped since the last intended-to-cosign block.
pub fn set_skipped_cosign(txn: &mut impl DbTxn, skipped: u64) {
let (intended, prior_skipped) = Self::get(txn).unwrap();
assert!(prior_skipped.is_none());
Self::set(txn, &(intended, Some(skipped)));
}
}
impl LatestCosignedBlock {
pub fn latest_cosigned_block(getter: &impl Get) -> u64 {
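// Floored at 1, so block 1 is always considered cosigned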
Self::get(getter).unwrap_or_default().max(1)
}
}
db_channel! {
SubstrateDbChannels {
CosignTransactions: (network: NetworkId) -> (Session, u64, [u8; 32]),
}
}
impl CosignTransactions {
// Append a cosign transaction.
pub fn append_cosign(txn: &mut impl DbTxn, set: ValidatorSet, number: u64, hash: [u8; 32]) {
CosignTransactions::send(txn, set.network, &(set.session, number, hash))
}
}
async fn block_has_events(
txn: &mut impl DbTxn,
serai: &Serai,
block: u64,
) -> Result<HasEvents, SeraiError> {
let cached = BlockHasEventsCache::get(txn, block);
match cached {
None => {
let serai = serai.as_of(
serai
.finalized_block_by_number(block)
.await?
.expect("couldn't get block which should've been finalized")
.hash(),
);
if !serai.validator_sets().key_gen_events().await?.is_empty() {
return Ok(HasEvents::KeyGen);
}
let has_no_events = serai.coins().burn_with_instruction_events().await?.is_empty() &&
serai.in_instructions().batch_events().await?.is_empty() &&
serai.validator_sets().new_set_events().await?.is_empty() &&
serai.validator_sets().set_retired_events().await?.is_empty();
let has_events = if has_no_events { HasEvents::No } else { HasEvents::Yes };
BlockHasEventsCache::set(txn, block, &has_events);
Ok(has_events)
}
Some(code) => Ok(code),
}
}
async fn potentially_cosign_block(
txn: &mut impl DbTxn,
serai: &Serai,
block: u64,
skipped_block: Option<u64>,
window_end_exclusive: u64,
) -> Result<bool, SeraiError> {
// The following code, which marks this block as cosigned if the prior block was cosigned,
// expects this block to be non-zero
// While we could perform this check there, there's no reason not to short-circuit the entire
// function as such
if block == 0 {
return Ok(false);
}
let block_has_events = block_has_events(txn, serai, block).await?;
// If this block had no events and immediately follows a cosigned block, mark it as cosigned
if (block_has_events == HasEvents::No) &&
(LatestCosignedBlock::latest_cosigned_block(txn) == (block - 1))
{
log::debug!("automatically co-signing next block ({block}) since it has no events");
LatestCosignedBlock::set(txn, &block);
}
// If we skipped a block, we're supposed to cosign it at skipped_block + COSIGN_DISTANCE if no
// other blocks trigger a cosigning protocol covering it
// This enforces the maximum allowed delay between a block needing cosigning and a cosign for
// it being triggered
let maximally_latent_cosign_block =
skipped_block.map(|skipped_block| skipped_block + COSIGN_DISTANCE);
// If this block is within the window,
if block < window_end_exclusive {
// and had a key gen (which changes who the cosigners are), cosign it
if block_has_events == HasEvents::KeyGen {
IntendedCosign::set_intended_cosign(txn, block);
// Carry skipped if it isn't included by cosigning this block
if let Some(skipped) = skipped_block {
if skipped > block {
IntendedCosign::set_skipped_cosign(txn, block);
}
}
return Ok(true);
}
} else if (Some(block) == maximally_latent_cosign_block) || (block_has_events != HasEvents::No) {
// Since this block was outside the window and had events/was maximally latent, cosign it
IntendedCosign::set_intended_cosign(txn, block);
return Ok(true);
}
Ok(false)
}
/*
Advances the cosign protocol as should be done per the latest block.
A block is considered cosigned if:
A) It was cosigned
B) It's the parent of a cosigned block
C) It immediately follows a cosigned block and has no events requiring cosigning
This only actually performs advancement within a limited bound (generally until it finds a block
which should be cosigned). Accordingly, it is necessary to call multiple times even if
`latest_number` doesn't change.
*/
async fn advance_cosign_protocol_inner(
db: &mut impl Db,
key: &Zeroizing<<Ristretto as Ciphersuite>::F>,
serai: &Serai,
latest_number: u64,
) -> Result<(), SeraiError> {
let mut txn = db.txn();
const INITIAL_INTENDED_COSIGN: u64 = 1;
let (last_intended_to_cosign_block, mut skipped_block) = {
let intended_cosign = IntendedCosign::get(&txn);
// If we haven't previously intended to cosign a block, set the intended cosign to block 1
if let Some(intended_cosign) = intended_cosign {
intended_cosign
} else {
IntendedCosign::set_intended_cosign(&mut txn, INITIAL_INTENDED_COSIGN);
IntendedCosign::get(&txn).unwrap()
}
};
// "windows" refers to the window of blocks where even if there's a block which should be
// cosigned, it won't be due to proximity due to the prior cosign
let mut window_end_exclusive = last_intended_to_cosign_block + COSIGN_DISTANCE;
// If we've never triggered a cosign, don't skip any cosigns based on proximity
if last_intended_to_cosign_block == INITIAL_INTENDED_COSIGN {
window_end_exclusive = 1;
}
// The consensus rules for this are `last_intended_to_cosign_block + 1`
let scan_start_block = last_intended_to_cosign_block + 1;
// As a practical optimization, we don't re-scan old blocks since old blocks are independent of
// new state
let scan_start_block = scan_start_block.max(ScanCosignFrom::get(&txn).unwrap_or(1));
// Check all blocks within the window to see if they should be cosigned
// If so, we're skipping them and need to flag them as skipped so that once the window closes, we
// do cosign them
// We only perform this check if we haven't already marked a block as skipped, since the cosign
// triggered by the skipped block will cover all other blocks within this window
if skipped_block.is_none() {
let window_end_inclusive = window_end_exclusive - 1;
for b in scan_start_block ..= window_end_inclusive.min(latest_number) {
if block_has_events(&mut txn, serai, b).await? == HasEvents::Yes {
skipped_block = Some(b);
log::debug!("skipping cosigning {b} due to proximity to prior cosign");
IntendedCosign::set_skipped_cosign(&mut txn, b);
break;
}
}
}
// A block which should be cosigned
let mut to_cosign = None;
// A list of sets which are cosigning, along with a boolean of if we're in the set
let mut cosigning = vec![];
for block in scan_start_block ..= latest_number {
let actual_block = serai
.finalized_block_by_number(block)
.await?
.expect("couldn't get block which should've been finalized");
// Save the block number for this block, as needed by the cosigner to perform cosigning
SeraiBlockNumber::set(&mut txn, actual_block.hash(), &block);
if potentially_cosign_block(&mut txn, serai, block, skipped_block, window_end_exclusive).await?
{
to_cosign = Some((block, actual_block.hash()));
// Get the keys as of the prior block
// If this block sets new keys, the coordinator won't acknowledge them until we process this
// block
// We won't process this block until it's cosigned
// Using the keys of the prior block ensures this deadlock isn't reached
let serai = serai.as_of(actual_block.header.parent_hash.into());
for network in serai_client::primitives::NETWORKS {
// Get the latest session to have set keys
let set_with_keys = {
let Some(latest_session) = serai.validator_sets().session(network).await? else {
continue;
};
let prior_session = Session(latest_session.0.saturating_sub(1));
if serai
.validator_sets()
.keys(ValidatorSet { network, session: prior_session })
.await?
.is_some()
{
ValidatorSet { network, session: prior_session }
} else {
let set = ValidatorSet { network, session: latest_session };
if serai.validator_sets().keys(set).await?.is_none() {
continue;
}
set
}
};
log::debug!("{:?} will be cosigning {block}", set_with_keys.network);
cosigning.push((set_with_keys, in_set(key, &serai, set_with_keys).await?.unwrap()));
}
break;
}
// If this TX is committed, always start future scanning from the next block
ScanCosignFrom::set(&mut txn, &(block + 1));
// Since we're scanning *from* the next block, tidy the cache
BlockHasEventsCache::del(&mut txn, block);
}
if let Some((number, hash)) = to_cosign {
// If this block doesn't have cosigners, yet does have events, automatically mark it as
// cosigned
if cosigning.is_empty() {
log::debug!("{} had no cosigners available, marking as cosigned", number);
LatestCosignedBlock::set(&mut txn, &number);
} else {
for (set, in_set) in cosigning {
if in_set {
log::debug!("cosigning {number} with {:?} {:?}", set.network, set.session);
CosignTransactions::append_cosign(&mut txn, set, number, hash);
}
}
}
}
txn.commit();
Ok(())
}
pub async fn advance_cosign_protocol(
db: &mut impl Db,
key: &Zeroizing<<Ristretto as Ciphersuite>::F>,
serai: &Serai,
latest_number: u64,
) -> Result<(), SeraiError> {
loop {
let scan_from = ScanCosignFrom::get(db).unwrap_or(1);
// Only scan 1000 blocks at a time to keep a massive txn from forming
let scan_to = latest_number.min(scan_from + 1000);
advance_cosign_protocol_inner(db, key, serai, scan_to).await?;
// If we didn't limit the scan_to, break
if scan_to == latest_number {
break;
}
}
Ok(())
}
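
A minimal caller sketch (illustrative, not from this commit; assumes `db`, `key`, and `serai` are in scope and polls at roughly the block time):

// Illustrative driver: advance the cosign protocol as new blocks finalize.
loop {
  let latest = serai.latest_finalized_block().await?.number();
  advance_cosign_protocol(&mut db, &key, &serai, latest).await?;
  tokio::time::sleep(core::time::Duration::from_secs(6)).await;
}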

View File

@@ -10,7 +10,7 @@ use ciphersuite::{group::GroupEncoding, Ciphersuite, Ristretto};
use serai_client::{
SeraiError, Block, Serai, TemporalSerai,
primitives::{BlockHash, NetworkId},
primitives::{BlockHash, EmbeddedEllipticCurve, NetworkId},
validator_sets::{primitives::ValidatorSet, ValidatorSetsEvent},
in_instructions::InInstructionsEvent,
coins::CoinsEvent,
@@ -60,13 +60,46 @@ async fn handle_new_set<D: Db>(
{
log::info!("present in set {:?}", set);
let set_data = {
let validators;
let mut evrf_public_keys = vec![];
{
let serai = serai.as_of(block.hash());
let serai = serai.validator_sets();
let set_participants =
serai.participants(set.network).await?.expect("NewSet for set which doesn't exist");
set_participants.into_iter().map(|(k, w)| (k, u16::try_from(w).unwrap())).collect::<Vec<_>>()
validators = set_participants
.iter()
.map(|(k, w)| {
(
<Ristretto as Ciphersuite>::read_G::<&[u8]>(&mut k.0.as_ref())
.expect("invalid key registered as participant"),
u16::try_from(*w).unwrap(),
)
})
.collect::<Vec<_>>();
for (validator, _) in set_participants {
// This is only run for external networks which always do a DKG for Serai
let substrate = serai
.embedded_elliptic_curve_key(validator, EmbeddedEllipticCurve::Embedwards25519)
.await?
.expect("Serai called NewSet on a validator without an Embedwards25519 key");
// `embedded_elliptic_curves` is documented to have the second entry be the
// network-specific curve (if it exists and is distinct from Embedwards25519)
let network =
if let Some(embedded_elliptic_curve) = set.network.embedded_elliptic_curves().get(1) {
serai.embedded_elliptic_curve_key(validator, *embedded_elliptic_curve).await?.expect(
"Serai called NewSet on a validator without the embedded key required for the network",
)
} else {
substrate.clone()
};
evrf_public_keys.push((
<[u8; 32]>::try_from(substrate)
.expect("validator-sets pallet accepted a key of an invalid length"),
network,
));
}
};
let time = if let Ok(time) = block.time() {
@@ -90,7 +123,7 @@ async fn handle_new_set<D: Db>(
const SUBSTRATE_TO_TRIBUTARY_TIME_DELAY: u64 = 120;
let time = time + SUBSTRATE_TO_TRIBUTARY_TIME_DELAY;
let spec = TributarySpec::new(block.hash(), time, set, set_data);
let spec = TributarySpec::new(block.hash(), time, set, validators, evrf_public_keys);
log::info!("creating new tributary for {:?}", spec.set());

View File

@@ -7,12 +7,8 @@ use zeroize::Zeroizing;
use rand_core::{RngCore, CryptoRng, OsRng};
use futures_util::{task::Poll, poll};
use ciphersuite::{
group::{ff::Field, GroupEncoding},
Ciphersuite, Ristretto,
};
use ciphersuite::{group::ff::Field, Ciphersuite, Ristretto};
use sp_application_crypto::sr25519;
use borsh::BorshDeserialize;
use serai_client::{
primitives::NetworkId,
@@ -52,12 +48,22 @@ pub fn new_spec<R: RngCore + CryptoRng>(
let set = ValidatorSet { session: Session(0), network: NetworkId::Bitcoin };
let set_participants = keys
let validators = keys
.iter()
.map(|key| (sr25519::Public((<Ristretto as Ciphersuite>::generator() * **key).to_bytes()), 1))
.map(|key| ((<Ristretto as Ciphersuite>::generator() * **key), 1))
.collect::<Vec<_>>();
let res = TributarySpec::new(serai_block, start_time, set, set_participants);
// Generate random eVRF keys, as none of these tests rely on them having any structure
let mut evrf_keys = vec![];
for _ in 0 .. keys.len() {
let mut substrate = [0; 32];
OsRng.fill_bytes(&mut substrate);
let mut network = vec![0; 64];
OsRng.fill_bytes(&mut network);
evrf_keys.push((substrate, network));
}
let res = TributarySpec::new(serai_block, start_time, set, validators, evrf_keys);
assert_eq!(
TributarySpec::deserialize_reader(&mut borsh::to_vec(&res).unwrap().as_slice()).unwrap(),
res,

View File

@@ -1,5 +1,4 @@
use core::time::Duration;
use std::collections::HashMap;
use zeroize::Zeroizing;
use rand_core::{RngCore, OsRng};
@@ -9,7 +8,7 @@ use frost::Participant;
use sp_runtime::traits::Verify;
use serai_client::{
primitives::{SeraiAddress, Signature},
primitives::Signature,
validator_sets::primitives::{ValidatorSet, KeyPair},
};
@@ -17,10 +16,7 @@ use tokio::time::sleep;
use serai_db::{Get, DbTxn, Db, MemDb};
use processor_messages::{
key_gen::{self, KeyGenId},
CoordinatorMessage,
};
use processor_messages::{key_gen, CoordinatorMessage};
use tributary::{TransactionTrait, Tributary};
@@ -54,44 +50,41 @@ async fn dkg_test() {
tokio::spawn(run_tributaries(tributaries.clone()));
let mut txs = vec![];
// Create DKG commitments for each key
// Create DKG participation for each key
for key in &keys {
let attempt = 0;
let mut commitments = vec![0; 256];
OsRng.fill_bytes(&mut commitments);
let mut participation = vec![0; 4096];
OsRng.fill_bytes(&mut participation);
let mut tx = Transaction::DkgCommitments {
attempt,
commitments: vec![commitments],
signed: Transaction::empty_signed(),
};
let mut tx =
Transaction::DkgParticipation { participation, signed: Transaction::empty_signed() };
tx.sign(&mut OsRng, spec.genesis(), key);
txs.push(tx);
}
let block_before_tx = tributaries[0].1.tip().await;
// Publish all commitments but one
for (i, tx) in txs.iter().enumerate().skip(1) {
// Publish t-1 participations
let t = ((keys.len() * 2) / 3) + 1;
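// e.g. with 5 keys, t = ((5 * 2) / 3) + 1 = 4, so 3 participations are published here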
for (i, tx) in txs.iter().take(t - 1).enumerate() {
assert_eq!(tributaries[i].1.add_transaction(tx.clone()).await, Ok(true));
}
// Wait until these are included
for tx in txs.iter().skip(1) {
wait_for_tx_inclusion(&tributaries[0].1, block_before_tx, tx.hash()).await;
}
let expected_commitments: HashMap<_, _> = txs
let expected_participations = txs
.iter()
.enumerate()
.map(|(i, tx)| {
if let Transaction::DkgCommitments { commitments, .. } = tx {
(Participant::new((i + 1).try_into().unwrap()).unwrap(), commitments[0].clone())
if let Transaction::DkgParticipation { participation, .. } = tx {
CoordinatorMessage::KeyGen(key_gen::CoordinatorMessage::Participation {
session: spec.set().session,
participant: Participant::new((i + 1).try_into().unwrap()).unwrap(),
participation: participation.clone(),
})
} else {
panic!("txs had non-commitments");
panic!("txs wasn't a DkgParticipation");
}
})
.collect();
.collect::<Vec<_>>();
async fn new_processors(
db: &mut MemDb,
@@ -120,28 +113,30 @@ async fn dkg_test() {
processors
}
// Instantiate a scanner and verify it has nothing to report
// Instantiate a scanner and verify it has the first two participations to report (and isn't
// waiting for `t`)
let processors = new_processors(&mut dbs[0], &keys[0], &spec, &tributaries[0].1).await;
assert!(processors.0.read().await.is_empty());
assert_eq!(processors.0.read().await.get(&spec.set().network).unwrap().len(), t - 1);
// Publish the last commitment
// Publish the rest of the participations
let block_before_tx = tributaries[0].1.tip().await;
assert_eq!(tributaries[0].1.add_transaction(txs[0].clone()).await, Ok(true));
wait_for_tx_inclusion(&tributaries[0].1, block_before_tx, txs[0].hash()).await;
sleep(Duration::from_secs(Tributary::<MemDb, Transaction, LocalP2p>::block_time().into())).await;
for tx in txs.iter().skip(t - 1) {
assert_eq!(tributaries[0].1.add_transaction(tx.clone()).await, Ok(true));
wait_for_tx_inclusion(&tributaries[0].1, block_before_tx, tx.hash()).await;
}
// Verify the scanner emits a KeyGen::Commitments message
// Verify the scanner emits all KeyGen::Participations messages
handle_new_blocks::<_, _, _, _, _, LocalP2p>(
&mut dbs[0],
&keys[0],
&|_, _, _, _| async {
panic!("provided TX caused recognized_id to be called after Commitments")
panic!("provided TX caused recognized_id to be called after DkgParticipation")
},
&processors,
&(),
&|_| async {
panic!(
"test tried to publish a new Tributary TX from handle_application_tx after Commitments"
"test tried to publish a new Tributary TX from handle_application_tx after DkgParticipation"
)
},
&spec,
@@ -150,17 +145,11 @@ async fn dkg_test() {
.await;
{
let mut msgs = processors.0.write().await;
assert_eq!(msgs.len(), 1);
let msgs = msgs.get_mut(&spec.set().network).unwrap();
let mut expected_commitments = expected_commitments.clone();
expected_commitments.remove(&Participant::new((1).try_into().unwrap()).unwrap());
assert_eq!(
msgs.pop_front().unwrap(),
CoordinatorMessage::KeyGen(key_gen::CoordinatorMessage::Commitments {
id: KeyGenId { session: spec.set().session, attempt: 0 },
commitments: expected_commitments
})
);
assert_eq!(msgs.len(), keys.len());
for expected in &expected_participations {
assert_eq!(&msgs.pop_front().unwrap(), expected);
}
assert!(msgs.is_empty());
}
@@ -168,149 +157,14 @@ async fn dkg_test() {
for (i, key) in keys.iter().enumerate().skip(1) {
let processors = new_processors(&mut dbs[i], key, &spec, &tributaries[i].1).await;
let mut msgs = processors.0.write().await;
assert_eq!(msgs.len(), 1);
let msgs = msgs.get_mut(&spec.set().network).unwrap();
let mut expected_commitments = expected_commitments.clone();
expected_commitments.remove(&Participant::new((i + 1).try_into().unwrap()).unwrap());
assert_eq!(
msgs.pop_front().unwrap(),
CoordinatorMessage::KeyGen(key_gen::CoordinatorMessage::Commitments {
id: KeyGenId { session: spec.set().session, attempt: 0 },
commitments: expected_commitments
})
);
assert_eq!(msgs.len(), keys.len());
for expected in &expected_participations {
assert_eq!(&msgs.pop_front().unwrap(), expected);
}
assert!(msgs.is_empty());
}
// Now do shares
let mut txs = vec![];
for (k, key) in keys.iter().enumerate() {
let attempt = 0;
let mut shares = vec![vec![]];
for i in 0 .. keys.len() {
if i != k {
let mut share = vec![0; 256];
OsRng.fill_bytes(&mut share);
shares.last_mut().unwrap().push(share);
}
}
let mut txn = dbs[k].txn();
let mut tx = Transaction::DkgShares {
attempt,
shares,
confirmation_nonces: crate::tributary::dkg_confirmation_nonces(key, &spec, &mut txn, 0),
signed: Transaction::empty_signed(),
};
txn.commit();
tx.sign(&mut OsRng, spec.genesis(), key);
txs.push(tx);
}
let block_before_tx = tributaries[0].1.tip().await;
for (i, tx) in txs.iter().enumerate().skip(1) {
assert_eq!(tributaries[i].1.add_transaction(tx.clone()).await, Ok(true));
}
for tx in txs.iter().skip(1) {
wait_for_tx_inclusion(&tributaries[0].1, block_before_tx, tx.hash()).await;
}
// With just 4 sets of shares, nothing should happen yet
handle_new_blocks::<_, _, _, _, _, LocalP2p>(
&mut dbs[0],
&keys[0],
&|_, _, _, _| async {
panic!("provided TX caused recognized_id to be called after some shares")
},
&processors,
&(),
&|_| async {
panic!(
"test tried to publish a new Tributary TX from handle_application_tx after some shares"
)
},
&spec,
&tributaries[0].1.reader(),
)
.await;
assert_eq!(processors.0.read().await.len(), 1);
assert!(processors.0.read().await[&spec.set().network].is_empty());
// Publish the final set of shares
let block_before_tx = tributaries[0].1.tip().await;
assert_eq!(tributaries[0].1.add_transaction(txs[0].clone()).await, Ok(true));
wait_for_tx_inclusion(&tributaries[0].1, block_before_tx, txs[0].hash()).await;
sleep(Duration::from_secs(Tributary::<MemDb, Transaction, LocalP2p>::block_time().into())).await;
// Each scanner should emit a distinct shares message
let shares_for = |i: usize| {
CoordinatorMessage::KeyGen(key_gen::CoordinatorMessage::Shares {
id: KeyGenId { session: spec.set().session, attempt: 0 },
shares: vec![txs
.iter()
.enumerate()
.filter_map(|(l, tx)| {
if let Transaction::DkgShares { shares, .. } = tx {
if i == l {
None
} else {
let relative_i = i - (if i > l { 1 } else { 0 });
Some((
Participant::new((l + 1).try_into().unwrap()).unwrap(),
shares[0][relative_i].clone(),
))
}
} else {
panic!("txs had non-shares");
}
})
.collect::<HashMap<_, _>>()],
})
};
// Any scanner which has handled the prior blocks should only emit the new event
for (i, key) in keys.iter().enumerate() {
handle_new_blocks::<_, _, _, _, _, LocalP2p>(
&mut dbs[i],
key,
&|_, _, _, _| async { panic!("provided TX caused recognized_id to be called after shares") },
&processors,
&(),
&|_| async { panic!("test tried to publish a new Tributary TX from handle_application_tx") },
&spec,
&tributaries[i].1.reader(),
)
.await;
{
let mut msgs = processors.0.write().await;
assert_eq!(msgs.len(), 1);
let msgs = msgs.get_mut(&spec.set().network).unwrap();
assert_eq!(msgs.pop_front().unwrap(), shares_for(i));
assert!(msgs.is_empty());
}
}
// Yet new scanners should emit all events
for (i, key) in keys.iter().enumerate() {
let processors = new_processors(&mut MemDb::new(), key, &spec, &tributaries[i].1).await;
let mut msgs = processors.0.write().await;
assert_eq!(msgs.len(), 1);
let msgs = msgs.get_mut(&spec.set().network).unwrap();
let mut expected_commitments = expected_commitments.clone();
expected_commitments.remove(&Participant::new((i + 1).try_into().unwrap()).unwrap());
assert_eq!(
msgs.pop_front().unwrap(),
CoordinatorMessage::KeyGen(key_gen::CoordinatorMessage::Commitments {
id: KeyGenId { session: spec.set().session, attempt: 0 },
commitments: expected_commitments
})
);
assert_eq!(msgs.pop_front().unwrap(), shares_for(i));
assert!(msgs.is_empty());
}
// Send DkgConfirmed
let mut substrate_key = [0; 32];
OsRng.fill_bytes(&mut substrate_key);
let mut network_key = vec![0; usize::try_from((OsRng.next_u64() % 32) + 32).unwrap()];
@@ -319,17 +173,19 @@ async fn dkg_test() {
let mut txs = vec![];
for (i, key) in keys.iter().enumerate() {
let attempt = 0;
let mut txn = dbs[i].txn();
let share =
crate::tributary::generated_key_pair::<MemDb>(&mut txn, key, &spec, &key_pair, 0).unwrap();
txn.commit();
let mut tx = Transaction::DkgConfirmed {
// Claim we've generated the key pair
crate::tributary::generated_key_pair::<MemDb>(&mut txn, spec.genesis(), &key_pair);
// Publish the nonces
let attempt = 0;
let mut tx = Transaction::DkgConfirmationNonces {
attempt,
confirmation_share: share,
confirmation_nonces: crate::tributary::dkg_confirmation_nonces(key, &spec, &mut txn, 0),
signed: Transaction::empty_signed(),
};
txn.commit();
tx.sign(&mut OsRng, spec.genesis(), key);
txs.push(tx);
}
@@ -341,6 +197,35 @@ async fn dkg_test() {
wait_for_tx_inclusion(&tributaries[0].1, block_before_tx, tx.hash()).await;
}
// This should not cause any new processor event as the processor doesn't handle DKG confirming
for (i, key) in keys.iter().enumerate() {
handle_new_blocks::<_, _, _, _, _, LocalP2p>(
&mut dbs[i],
key,
&|_, _, _, _| async {
panic!("provided TX caused recognized_id to be called after DkgConfirmationNonces")
},
&processors,
&(),
// The Tributary handler should publish ConfirmationShare itself after ConfirmationNonces
&|tx| async { assert_eq!(tributaries[i].1.add_transaction(tx).await, Ok(true)) },
&spec,
&tributaries[i].1.reader(),
)
.await;
{
assert!(processors.0.read().await.get(&spec.set().network).unwrap().is_empty());
}
}
// Yet once these TXs are on-chain, the tributary should itself publish the confirmation shares
// This means in the block after the next block, the keys should be set onto Serai
// Sleep twice as long as two blocks, in case there's some stability issue
sleep(Duration::from_secs(
2 * 2 * u64::from(Tributary::<MemDb, Transaction, LocalP2p>::block_time()),
))
.await;
struct CheckPublishSetKeys {
spec: TributarySpec,
key_pair: KeyPair,
@@ -351,19 +236,24 @@ async fn dkg_test() {
&self,
_db: &(impl Sync + Get),
set: ValidatorSet,
removed: Vec<SeraiAddress>,
key_pair: KeyPair,
signature_participants: bitvec::vec::BitVec<u8, bitvec::order::Lsb0>,
signature: Signature,
) {
assert_eq!(set, self.spec.set());
assert!(removed.is_empty());
assert_eq!(self.key_pair, key_pair);
assert!(signature.verify(
&*serai_client::validator_sets::primitives::set_keys_message(&set, &[], &key_pair),
&*serai_client::validator_sets::primitives::set_keys_message(&set, &key_pair),
&serai_client::Public(
frost::dkg::musig::musig_key::<Ristretto>(
&serai_client::validator_sets::primitives::musig_context(set),
&self.spec.validators().into_iter().map(|(validator, _)| validator).collect::<Vec<_>>()
&self
.spec
.validators()
.into_iter()
.zip(signature_participants)
.filter_map(|((validator, _), included)| included.then_some(validator))
.collect::<Vec<_>>()
)
.unwrap()
.to_bytes()

View File

@@ -6,7 +6,7 @@ use ciphersuite::{group::Group, Ciphersuite, Ristretto};
use scale::{Encode, Decode};
use serai_client::{
primitives::{SeraiAddress, Signature},
primitives::Signature,
validator_sets::primitives::{MAX_KEY_SHARES_PER_SET, ValidatorSet, KeyPair},
};
use processor_messages::coordinator::SubstrateSignableId;
@@ -32,8 +32,8 @@ impl PublishSeraiTransaction for () {
&self,
_db: &(impl Sync + serai_db::Get),
_set: ValidatorSet,
_removed: Vec<SeraiAddress>,
_key_pair: KeyPair,
_signature_participants: bitvec::vec::BitVec<u8, bitvec::order::Lsb0>,
_signature: Signature,
) {
panic!("publish_set_keys was called in test")
@@ -84,23 +84,25 @@ fn tx_size_limit() {
use tributary::TRANSACTION_SIZE_LIMIT;
let max_dkg_coefficients = (MAX_KEY_SHARES_PER_SET * 2).div_ceil(3) + 1;
let max_key_shares_per_individual = MAX_KEY_SHARES_PER_SET - max_dkg_coefficients;
// Handwave the DKG Commitments size as the size of the commitments to the coefficients and
// 1024 bytes for all overhead
let handwaved_dkg_commitments_size = (max_dkg_coefficients * MAX_KEY_LEN) + 1024;
assert!(
u32::try_from(TRANSACTION_SIZE_LIMIT).unwrap() >=
(handwaved_dkg_commitments_size * max_key_shares_per_individual)
);
// n coefficients
// 2 ECDH values per recipient, and the encrypted share
let elements_outside_of_proof = max_dkg_coefficients + ((2 + 1) * MAX_KEY_SHARES_PER_SET);
// Then Pedersen Vector Commitments for each DH done, and the associated overhead in the proof
// It's handwaved as one commitment per DH, where we do 2 per coefficient and 1 for the explicit
// ECDHs
let vector_commitments = (2 * max_dkg_coefficients) + (2 * MAX_KEY_SHARES_PER_SET);
// Then we have commitments to the `t` polynomial of length 2 + 2 nc, where nc is the amount of
// commitments
let t_commitments = 2 + (2 * vector_commitments);
// The remainder of the proof should be ~30 elements
let proof_elements = 30;
// Encryption key, PoP (2 elements), message
let elements_per_share = 4;
let handwaved_dkg_shares_size =
(elements_per_share * MAX_KEY_LEN * MAX_KEY_SHARES_PER_SET) + 1024;
assert!(
u32::try_from(TRANSACTION_SIZE_LIMIT).unwrap() >=
(handwaved_dkg_shares_size * max_key_shares_per_individual)
);
let handwaved_dkg_size =
((elements_outside_of_proof + vector_commitments + t_commitments + proof_elements) *
MAX_KEY_LEN) +
1024;
// Further scale by two in case of any errors in the above
assert!(u32::try_from(TRANSACTION_SIZE_LIMIT).unwrap() >= (2 * handwaved_dkg_size));
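// Illustrative numbers (hypothetical constants, not the crate's actual values): with
// MAX_KEY_SHARES_PER_SET = 150 and MAX_KEY_LEN = 96, max_dkg_coefficients = 101,
// elements_outside_of_proof = 551, vector_commitments = 502, and t_commitments = 1006, so
// handwaved_dkg_size = ((551 + 502 + 1006 + 30) * 96) + 1024 = 201_568, requiring the limit
// cover 403_136 bytes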
}
#[test]
@@ -143,84 +145,34 @@ fn serialize_sign_data() {
#[test]
fn serialize_transaction() {
test_read_write(&Transaction::RemoveParticipantDueToDkg {
test_read_write(&Transaction::RemoveParticipant {
participant: <Ristretto as Ciphersuite>::G::random(&mut OsRng),
signed: random_signed_with_nonce(&mut OsRng, 0),
});
{
let mut commitments = vec![random_vec(&mut OsRng, 512)];
for _ in 0 .. (OsRng.next_u64() % 100) {
let mut temp = commitments[0].clone();
OsRng.fill_bytes(&mut temp);
commitments.push(temp);
}
test_read_write(&Transaction::DkgCommitments {
attempt: random_u32(&mut OsRng),
commitments,
signed: random_signed_with_nonce(&mut OsRng, 0),
});
}
test_read_write(&Transaction::DkgParticipation {
participation: random_vec(&mut OsRng, 4096),
signed: random_signed_with_nonce(&mut OsRng, 0),
});
{
// This supports a variable share length, and variable amount of sent shares, yet share length
// and sent shares is expected to be constant among recipients
let share_len = usize::try_from((OsRng.next_u64() % 512) + 1).unwrap();
let amount_of_shares = usize::try_from((OsRng.next_u64() % 3) + 1).unwrap();
// Create a valid vec of shares
let mut shares = vec![];
// Create up to 150 participants
for _ in 0 ..= (OsRng.next_u64() % 150) {
// Give each sender multiple shares
let mut sender_shares = vec![];
for _ in 0 .. amount_of_shares {
let mut share = vec![0; share_len];
OsRng.fill_bytes(&mut share);
sender_shares.push(share);
}
shares.push(sender_shares);
}
test_read_write(&Transaction::DkgConfirmationNonces {
attempt: random_u32(&mut OsRng),
confirmation_nonces: {
let mut nonces = [0; 64];
OsRng.fill_bytes(&mut nonces);
nonces
},
signed: random_signed_with_nonce(&mut OsRng, 0),
});
test_read_write(&Transaction::DkgShares {
attempt: random_u32(&mut OsRng),
shares,
confirmation_nonces: {
let mut nonces = [0; 64];
OsRng.fill_bytes(&mut nonces);
nonces
},
signed: random_signed_with_nonce(&mut OsRng, 1),
});
}
for i in 0 .. 2 {
test_read_write(&Transaction::InvalidDkgShare {
attempt: random_u32(&mut OsRng),
accuser: frost::Participant::new(
u16::try_from(OsRng.next_u64() >> 48).unwrap().saturating_add(1),
)
.unwrap(),
faulty: frost::Participant::new(
u16::try_from(OsRng.next_u64() >> 48).unwrap().saturating_add(1),
)
.unwrap(),
blame: if i == 0 {
None
} else {
Some(random_vec(&mut OsRng, 500)).filter(|blame| !blame.is_empty())
},
signed: random_signed_with_nonce(&mut OsRng, 2),
});
}
test_read_write(&Transaction::DkgConfirmed {
test_read_write(&Transaction::DkgConfirmationShare {
attempt: random_u32(&mut OsRng),
confirmation_share: {
let mut share = [0; 32];
OsRng.fill_bytes(&mut share);
share
},
signed: random_signed_with_nonce(&mut OsRng, 2),
signed: random_signed_with_nonce(&mut OsRng, 1),
});
{

View File

@@ -29,7 +29,7 @@ async fn sync_test() {
let mut keys = new_keys(&mut OsRng);
let spec = new_spec(&mut OsRng, &keys);
// Ensure this can have a node fail
assert!(spec.n(&[]) > spec.t());
assert!(spec.n() > spec.t());
let mut tributaries = new_tributaries(&keys, &spec)
.await
@@ -142,7 +142,7 @@ async fn sync_test() {
// Because only `t` validators are used in a commit, take n - t nodes offline
// leaving only `t` nodes. Which should force it to participate in the consensus
// of next blocks.
let spares = usize::from(spec.n(&[]) - spec.t());
let spares = usize::from(spec.n() - spec.t());
for thread in p2p_threads.iter().take(spares) {
thread.abort();
}

View File

@@ -37,15 +37,14 @@ async fn tx_test() {
usize::try_from(OsRng.next_u64() % u64::try_from(tributaries.len()).unwrap()).unwrap();
let key = keys[sender].clone();
let attempt = 0;
let mut commitments = vec![0; 256];
OsRng.fill_bytes(&mut commitments);
// Create the TX with a null signature so we can get its sig hash
let block_before_tx = tributaries[sender].1.tip().await;
let mut tx = Transaction::DkgCommitments {
attempt,
commitments: vec![commitments.clone()],
// Create the TX with a null signature so we can get its sig hash
let mut tx = Transaction::DkgParticipation {
participation: {
let mut participation = vec![0; 4096];
OsRng.fill_bytes(&mut participation);
participation
},
signed: Transaction::empty_signed(),
};
tx.sign(&mut OsRng, spec.genesis(), &key);

View File

@@ -18,7 +18,6 @@ use crate::tributary::{Label, Transaction};
#[derive(Clone, Copy, PartialEq, Eq, Debug, Encode, BorshSerialize, BorshDeserialize)]
pub enum Topic {
Dkg,
DkgConfirmation,
SubstrateSign(SubstrateSignableId),
Sign([u8; 32]),
@@ -46,15 +45,13 @@ pub enum Accumulation {
create_db!(
Tributary {
SeraiBlockNumber: (hash: [u8; 32]) -> u64,
SeraiDkgCompleted: (spec: ValidatorSet) -> [u8; 32],
SeraiDkgCompleted: (set: ValidatorSet) -> [u8; 32],
TributaryBlockNumber: (block: [u8; 32]) -> u32,
LastHandledBlock: (genesis: [u8; 32]) -> [u8; 32],
// TODO: Revisit the point of this
FatalSlashes: (genesis: [u8; 32]) -> Vec<[u8; 32]>,
RemovedAsOfDkgAttempt: (genesis: [u8; 32], attempt: u32) -> Vec<[u8; 32]>,
OfflineDuringDkg: (genesis: [u8; 32]) -> Vec<[u8; 32]>,
// TODO: Combine these two
FatallySlashed: (genesis: [u8; 32], account: [u8; 32]) -> (),
SlashPoints: (genesis: [u8; 32], account: [u8; 32]) -> u32,
@@ -67,11 +64,9 @@ create_db!(
DataReceived: (genesis: [u8; 32], data_spec: &DataSpecification) -> u16,
DataDb: (genesis: [u8; 32], data_spec: &DataSpecification, signer_bytes: &[u8; 32]) -> Vec<u8>,
DkgShare: (genesis: [u8; 32], from: u16, to: u16) -> Vec<u8>,
DkgParticipation: (genesis: [u8; 32], from: u16) -> Vec<u8>,
ConfirmationNonces: (genesis: [u8; 32], attempt: u32) -> HashMap<Participant, Vec<u8>>,
DkgKeyPair: (genesis: [u8; 32], attempt: u32) -> KeyPair,
KeyToDkgAttempt: (key: [u8; 32]) -> u32,
DkgLocallyCompleted: (genesis: [u8; 32]) -> (),
DkgKeyPair: (genesis: [u8; 32]) -> KeyPair,
PlanIds: (genesis: &[u8], block: u64) -> Vec<[u8; 32]>,
@@ -123,12 +118,12 @@ impl AttemptDb {
pub fn attempt(getter: &impl Get, genesis: [u8; 32], topic: Topic) -> Option<u32> {
let attempt = Self::get(getter, genesis, &topic);
// Don't require explicit recognition of the Dkg topic as it starts when the chain does
// Don't require explicit recognition of the DkgConfirmation topic as it starts when the chain
// does
// Don't require explicit recognition of the SlashReport topic as it isn't a DoS risk and it
// should always happen (eventually)
if attempt.is_none() &&
((topic == Topic::Dkg) ||
(topic == Topic::DkgConfirmation) ||
((topic == Topic::DkgConfirmation) ||
(topic == Topic::SubstrateSign(SubstrateSignableId::SlashReport)))
{
return Some(0);
@@ -155,16 +150,12 @@ impl ReattemptDb {
// 5 minutes for attempts 0 ..= 2, 10 minutes for attempts 3 ..= 5, 15 minutes for attempts > 5
// Assumes no event will take longer than 15 minutes, yet grows the time in case there are
// network bandwidth issues
let mut reattempt_delay = BASE_REATTEMPT_DELAY *
let reattempt_delay = BASE_REATTEMPT_DELAY *
((AttemptDb::attempt(txn, genesis, topic)
.expect("scheduling re-attempt for unknown topic") /
3) +
1)
.min(3);
// Allow more time for DKGs since they have an extra round and much more data
if matches!(topic, Topic::Dkg) {
reattempt_delay *= 4;
}
let upon_block = current_block_number + reattempt_delay;
let mut reattempts = Self::get(txn, genesis, upon_block).unwrap_or(vec![]);

View File

@@ -13,7 +13,7 @@ use serai_client::{Signature, validator_sets::primitives::KeyPair};
use tributary::{Signed, TransactionKind, TransactionTrait};
use processor_messages::{
key_gen::{self, KeyGenId},
key_gen::self,
coordinator::{self, SubstrateSignableId, SubstrateSignId},
sign::{self, SignId},
};
@@ -38,33 +38,20 @@ pub fn dkg_confirmation_nonces(
txn: &mut impl DbTxn,
attempt: u32,
) -> [u8; 64] {
DkgConfirmer::new(key, spec, txn, attempt)
.expect("getting DKG confirmation nonces for unknown attempt")
.preprocess()
DkgConfirmer::new(key, spec, txn, attempt).preprocess()
}
pub fn generated_key_pair<D: Db>(
txn: &mut D::Transaction<'_>,
key: &Zeroizing<<Ristretto as Ciphersuite>::F>,
spec: &TributarySpec,
genesis: [u8; 32],
key_pair: &KeyPair,
attempt: u32,
) -> Result<[u8; 32], Participant> {
DkgKeyPair::set(txn, spec.genesis(), attempt, key_pair);
KeyToDkgAttempt::set(txn, key_pair.0 .0, &attempt);
let preprocesses = ConfirmationNonces::get(txn, spec.genesis(), attempt).unwrap();
DkgConfirmer::new(key, spec, txn, attempt)
.expect("claiming to have generated a key pair for an unrecognized attempt")
.share(preprocesses, key_pair)
) {
DkgKeyPair::set(txn, genesis, key_pair);
}
fn unflatten(
spec: &TributarySpec,
removed: &[<Ristretto as Ciphersuite>::G],
data: &mut HashMap<Participant, Vec<u8>>,
) {
fn unflatten(spec: &TributarySpec, data: &mut HashMap<Participant, Vec<u8>>) {
for (validator, _) in spec.validators() {
let Some(range) = spec.i(removed, validator) else { continue };
let Some(range) = spec.i(validator) else { continue };
let Some(all_segments) = data.remove(&range.start) else {
continue;
};
@@ -88,7 +75,6 @@ impl<
{
fn accumulate(
&mut self,
removed: &[<Ristretto as Ciphersuite>::G],
data_spec: &DataSpecification,
signer: <Ristretto as Ciphersuite>::G,
data: &Vec<u8>,
@@ -99,10 +85,7 @@ impl<
panic!("accumulating data for a participant multiple times");
}
let signer_shares = {
let Some(signer_i) = self.spec.i(removed, signer) else {
log::warn!("accumulating data from {} who was removed", hex::encode(signer.to_bytes()));
return Accumulation::NotReady;
};
let signer_i = self.spec.i(signer).expect("transaction signer wasn't a member of the set");
u16::from(signer_i.end) - u16::from(signer_i.start)
};
@@ -115,11 +98,7 @@ impl<
// If 2/3rds of the network participated in this preprocess, queue it for an automatic
// re-attempt
// DkgConfirmation doesn't have a re-attempt as it's just an extension for Dkg
if (data_spec.label == Label::Preprocess) &&
received_range.contains(&self.spec.t()) &&
(data_spec.topic != Topic::DkgConfirmation)
{
if (data_spec.label == Label::Preprocess) && received_range.contains(&self.spec.t()) {
// Double check the attempt on this entry, as we don't want to schedule a re-attempt if this
// is an old entry
// This is an assert, not part of the if check, as old data shouldn't be here in the first
@@ -129,10 +108,7 @@ impl<
}
// If we have all the needed commitments/preprocesses/shares, tell the processor
let needs_everyone =
(data_spec.topic == Topic::Dkg) || (data_spec.topic == Topic::DkgConfirmation);
let needed = if needs_everyone { self.spec.n(removed) } else { self.spec.t() };
if received_range.contains(&needed) {
if received_range.contains(&self.spec.t()) {
log::debug!(
"accumulation for entry {:?} attempt #{} is ready",
&data_spec.topic,
@@ -141,7 +117,7 @@ impl<
let mut data = HashMap::new();
for validator in self.spec.validators().iter().map(|validator| validator.0) {
let Some(i) = self.spec.i(removed, validator) else { continue };
let Some(i) = self.spec.i(validator) else { continue };
data.insert(
i.start,
if let Some(data) = DataDb::get(self.txn, genesis, data_spec, &validator.to_bytes()) {
@@ -152,10 +128,10 @@ impl<
);
}
assert_eq!(data.len(), usize::from(needed));
assert_eq!(data.len(), usize::from(self.spec.t()));
// Remove our own piece of data, if we were involved
if let Some(i) = self.spec.i(removed, Ristretto::generator() * self.our_key.deref()) {
if let Some(i) = self.spec.i(Ristretto::generator() * self.our_key.deref()) {
if data.remove(&i.start).is_some() {
return Accumulation::Ready(DataSet::Participating(data));
}
@@ -167,7 +143,6 @@ impl<
fn handle_data(
&mut self,
removed: &[<Ristretto as Ciphersuite>::G],
data_spec: &DataSpecification,
bytes: &Vec<u8>,
signed: &Signed,
@@ -213,21 +188,15 @@ impl<
// TODO: If this is shares, we need to check they are part of the selected signing set
// Accumulate this data
self.accumulate(removed, data_spec, signed.signer, bytes)
self.accumulate(data_spec, signed.signer, bytes)
}
fn check_sign_data_len(
&mut self,
removed: &[<Ristretto as Ciphersuite>::G],
signer: <Ristretto as Ciphersuite>::G,
len: usize,
) -> Result<(), ()> {
let Some(signer_i) = self.spec.i(removed, signer) else {
// TODO: Ensure processor doesn't so participate/check how it handles removals for being
// offline
self.fatal_slash(signer.to_bytes(), "signer participated despite being removed");
Err(())?
};
let signer_i = self.spec.i(signer).expect("signer wasn't a member of the set");
if len != usize::from(u16::from(signer_i.end) - u16::from(signer_i.start)) {
self.fatal_slash(
signer.to_bytes(),
@@ -254,12 +223,9 @@ impl<
}
match tx {
Transaction::RemoveParticipantDueToDkg { participant, signed } => {
if self.spec.i(&[], participant).is_none() {
self.fatal_slash(
participant.to_bytes(),
"RemoveParticipantDueToDkg vote for non-validator",
);
Transaction::RemoveParticipant { participant, signed } => {
if self.spec.i(participant).is_none() {
self.fatal_slash(participant.to_bytes(), "RemoveParticipant vote for non-validator");
return;
}
@@ -274,268 +240,106 @@ impl<
let prior_votes = VotesToRemove::get(self.txn, genesis, participant).unwrap_or(0);
let signer_votes =
self.spec.i(&[], signed.signer).expect("signer wasn't a validator for this network?");
self.spec.i(signed.signer).expect("signer wasn't a validator for this network?");
let new_votes = prior_votes + u16::from(signer_votes.end) - u16::from(signer_votes.start);
VotesToRemove::set(self.txn, genesis, participant, &new_votes);
if ((prior_votes + 1) ..= new_votes).contains(&self.spec.t()) {
self.fatal_slash(participant, "RemoveParticipantDueToDkg vote")
self.fatal_slash(participant, "RemoveParticipant vote")
}
}
Transaction::DkgCommitments { attempt, commitments, signed } => {
let Some(removed) = removed_as_of_dkg_attempt(self.txn, genesis, attempt) else {
self.fatal_slash(signed.signer.to_bytes(), "DkgCommitments with an unrecognized attempt");
return;
};
let Ok(()) = self.check_sign_data_len(&removed, signed.signer, commitments.len()) else {
return;
};
let data_spec = DataSpecification { topic: Topic::Dkg, label: Label::Preprocess, attempt };
match self.handle_data(&removed, &data_spec, &commitments.encode(), &signed) {
Accumulation::Ready(DataSet::Participating(mut commitments)) => {
log::info!("got all DkgCommitments for {}", hex::encode(genesis));
unflatten(self.spec, &removed, &mut commitments);
self
.processors
.send(
self.spec.set().network,
key_gen::CoordinatorMessage::Commitments {
id: KeyGenId { session: self.spec.set().session, attempt },
commitments,
},
)
.await;
}
Accumulation::Ready(DataSet::NotParticipating) => {
assert!(
removed.contains(&(Ristretto::generator() * self.our_key.deref())),
"NotParticipating in a DkgCommitments we weren't removed for"
);
}
Accumulation::NotReady => {}
}
}
Transaction::DkgShares { attempt, mut shares, confirmation_nonces, signed } => {
let Some(removed) = removed_as_of_dkg_attempt(self.txn, genesis, attempt) else {
self.fatal_slash(signed.signer.to_bytes(), "DkgShares with an unrecognized attempt");
return;
};
let not_participating = removed.contains(&(Ristretto::generator() * self.our_key.deref()));
let Ok(()) = self.check_sign_data_len(&removed, signed.signer, shares.len()) else {
return;
};
let Some(sender_i) = self.spec.i(&removed, signed.signer) else {
self.fatal_slash(
signed.signer.to_bytes(),
"DkgShares for a DKG they aren't participating in",
);
return;
};
let sender_is_len = u16::from(sender_i.end) - u16::from(sender_i.start);
for shares in &shares {
if shares.len() != (usize::from(self.spec.n(&removed) - sender_is_len)) {
self.fatal_slash(signed.signer.to_bytes(), "invalid amount of DKG shares");
return;
}
}
// Save each share as needed for blame
for (from_offset, shares) in shares.iter().enumerate() {
let from =
Participant::new(u16::from(sender_i.start) + u16::try_from(from_offset).unwrap())
.unwrap();
for (to_offset, share) in shares.iter().enumerate() {
// 0-indexed (the enumeration) to 1-indexed (Participant)
let mut to = u16::try_from(to_offset).unwrap() + 1;
// Adjust for the omission of the sender's own shares
if to >= u16::from(sender_i.start) {
to += u16::from(sender_i.end) - u16::from(sender_i.start);
}
let to = Participant::new(to).unwrap();
DkgShare::set(self.txn, genesis, from.into(), to.into(), share);
}
}
// Filter down to only our share's bytes for handle
let our_shares = if let Some(our_i) =
self.spec.i(&removed, Ristretto::generator() * self.our_key.deref())
{
if sender_i == our_i {
vec![]
} else {
// 1-indexed to 0-indexed
let mut our_i_pos = u16::from(our_i.start) - 1;
// Handle the omission of the sender's own data
if u16::from(our_i.start) > u16::from(sender_i.start) {
our_i_pos -= sender_is_len;
}
let our_i_pos = usize::from(our_i_pos);
shares
.iter_mut()
.map(|shares| {
shares
.drain(
our_i_pos ..
(our_i_pos + usize::from(u16::from(our_i.end) - u16::from(our_i.start))),
)
.collect::<Vec<_>>()
})
.collect()
}
} else {
assert!(
not_participating,
"we didn't have an i while handling DkgShares we weren't removed for"
);
// Since we're not participating, simply save vec![] for our shares
vec![]
};
// Drop shares as it's presumably been mutated into invalidity
drop(shares);
let data_spec = DataSpecification { topic: Topic::Dkg, label: Label::Share, attempt };
let encoded_data = (confirmation_nonces.to_vec(), our_shares.encode()).encode();
match self.handle_data(&removed, &data_spec, &encoded_data, &signed) {
Accumulation::Ready(DataSet::Participating(confirmation_nonces_and_shares)) => {
log::info!("got all DkgShares for {}", hex::encode(genesis));
let mut confirmation_nonces = HashMap::new();
let mut shares = HashMap::new();
for (participant, confirmation_nonces_and_shares) in confirmation_nonces_and_shares {
let (these_confirmation_nonces, these_shares) =
<(Vec<u8>, Vec<u8>)>::decode(&mut confirmation_nonces_and_shares.as_slice())
.unwrap();
confirmation_nonces.insert(participant, these_confirmation_nonces);
shares.insert(participant, these_shares);
}
ConfirmationNonces::set(self.txn, genesis, attempt, &confirmation_nonces);
// shares is a HashMap<Participant, Vec<Vec<Vec<u8>>>>, with the values representing:
// - Each of the sender's shares
// - Each of our shares
// - Each share
// We need a Vec<HashMap<Participant, Vec<u8>>>, with the outer being each of ours
let mut expanded_shares = vec![];
for (sender_start_i, shares) in shares {
let shares: Vec<Vec<Vec<u8>>> = Vec::<_>::decode(&mut shares.as_slice()).unwrap();
for (sender_i_offset, our_shares) in shares.into_iter().enumerate() {
for (our_share_i, our_share) in our_shares.into_iter().enumerate() {
if expanded_shares.len() <= our_share_i {
expanded_shares.push(HashMap::new());
}
expanded_shares[our_share_i].insert(
Participant::new(
u16::from(sender_start_i) + u16::try_from(sender_i_offset).unwrap(),
)
.unwrap(),
our_share,
);
}
}
}
self
.processors
.send(
self.spec.set().network,
key_gen::CoordinatorMessage::Shares {
id: KeyGenId { session: self.spec.set().session, attempt },
shares: expanded_shares,
},
)
.await;
}
Accumulation::Ready(DataSet::NotParticipating) => {
assert!(not_participating, "NotParticipating in a DkgShares we weren't removed for");
}
Accumulation::NotReady => {}
}
}
Transaction::InvalidDkgShare { attempt, accuser, faulty, blame, signed } => {
let Some(removed) = removed_as_of_dkg_attempt(self.txn, genesis, attempt) else {
self
.fatal_slash(signed.signer.to_bytes(), "InvalidDkgShare with an unrecognized attempt");
return;
};
let Some(range) = self.spec.i(&removed, signed.signer) else {
self.fatal_slash(
signed.signer.to_bytes(),
"InvalidDkgShare for a DKG they aren't participating in",
);
return;
};
if !range.contains(&accuser) {
self.fatal_slash(
signed.signer.to_bytes(),
"accused with a Participant index which wasn't theirs",
);
return;
}
if range.contains(&faulty) {
self.fatal_slash(signed.signer.to_bytes(), "accused self of having an InvalidDkgShare");
return;
}
let Some(share) = DkgShare::get(self.txn, genesis, accuser.into(), faulty.into()) else {
self.fatal_slash(
signed.signer.to_bytes(),
"InvalidDkgShare had a non-existent faulty participant",
);
return;
};
self
.processors
.send(
self.spec.set().network,
key_gen::CoordinatorMessage::VerifyBlame {
id: KeyGenId { session: self.spec.set().session, attempt },
accuser,
accused: faulty,
share,
blame,
},
)
.await;
}
Transaction::DkgParticipation { participation, signed } => {
// Send the participation to the processor
self
.processors
.send(
self.spec.set().network,
key_gen::CoordinatorMessage::Participation {
session: self.spec.set().session,
participant: self
.spec
.i(signed.signer)
.expect("signer wasn't a validator for this network?")
.start,
participation,
},
)
.await;
}
Transaction::DkgConfirmationNonces { attempt, confirmation_nonces, signed } => {
let data_spec =
DataSpecification { topic: Topic::DkgConfirmation, label: Label::Preprocess, attempt };
match self.handle_data(&data_spec, &confirmation_nonces.to_vec(), &signed) {
Accumulation::Ready(DataSet::Participating(confirmation_nonces)) => {
log::info!(
"got all DkgConfirmationNonces for {}, attempt {attempt}",
hex::encode(genesis)
);
ConfirmationNonces::set(self.txn, genesis, attempt, &confirmation_nonces);
// Send the expected DkgConfirmationShare
// TODO: Slight race condition here due to set, publish tx, then commit txn
let key_pair = DkgKeyPair::get(self.txn, genesis)
.expect("participating in confirming key we don't have");
let mut tx = match DkgConfirmer::new(self.our_key, self.spec, self.txn, attempt)
.share(confirmation_nonces, &key_pair)
{
Ok(confirmation_share) => Transaction::DkgConfirmationShare {
attempt,
confirmation_share,
signed: Transaction::empty_signed(),
},
Err(participant) => Transaction::RemoveParticipant {
participant: self.spec.reverse_lookup_i(participant).unwrap(),
signed: Transaction::empty_signed(),
},
};
tx.sign(&mut OsRng, genesis, self.our_key);
self.publish_tributary_tx.publish_tributary_tx(tx).await;
}
Accumulation::Ready(DataSet::NotParticipating) | Accumulation::NotReady => {}
}
}
Transaction::DkgConfirmationShare { attempt, confirmation_share, signed } => {
let data_spec =
DataSpecification { topic: Topic::DkgConfirmation, label: Label::Share, attempt };
match self.handle_data(&data_spec, &confirmation_share.to_vec(), &signed) {
Accumulation::Ready(DataSet::Participating(shares)) => {
log::info!("got all DkgConfirmed for {}", hex::encode(genesis));
let Some(removed) = removed_as_of_dkg_attempt(self.txn, genesis, attempt) else {
panic!(
"DkgConfirmed for everyone yet didn't have the removed parties for this attempt",
);
};
log::info!(
"got all DkgConfirmationShare for {}, attempt {attempt}",
hex::encode(genesis)
);
let preprocesses = ConfirmationNonces::get(self.txn, genesis, attempt).unwrap();
// TODO: This can technically happen under very very very specific timing as the txn
// put happens before DkgConfirmationShare, yet the txn isn't guaranteed to be
// committed
let key_pair = DkgKeyPair::get(self.txn, genesis).expect(
"in DkgConfirmationShare handling, which happens after everyone \
(including us) fires DkgConfirmationShare, yet no confirming key pair",
);
// Determine the bitstring representing who participated before we move `shares`
let validators = self.spec.validators();
let mut signature_participants = bitvec::vec::BitVec::with_capacity(validators.len());
for (participant, _) in validators {
signature_participants.push(
(participant == (<Ristretto as Ciphersuite>::generator() * self.our_key.deref())) ||
shares.contains_key(&self.spec.i(participant).unwrap().start),
);
}
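// Bit i of signature_participants corresponds to validators()[i], and is set when that
// validator is ourselves or provided a confirmation share above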
// Produce the final signature
let mut confirmer = DkgConfirmer::new(self.our_key, self.spec, self.txn, attempt);
let sig = match confirmer.complete(preprocesses, &key_pair, shares) {
Ok(sig) => sig,
Err(p) => {
let mut tx = Transaction::RemoveParticipant {
participant: self.spec.reverse_lookup_i(p).unwrap(),
signed: Transaction::empty_signed(),
};
tx.sign(&mut OsRng, genesis, self.our_key);
@@ -544,23 +348,18 @@ impl<
}
};
DkgLocallyCompleted::set(self.txn, genesis, &());
self
.publish_serai_tx
.publish_set_keys(
self.db,
self.spec.set(),
key_pair,
signature_participants,
Signature(sig),
)
.await;
}
Accumulation::Ready(DataSet::NotParticipating) | Accumulation::NotReady => {}
}
}
@@ -618,19 +417,8 @@ impl<
}
Transaction::SubstrateSign(data) => {
// Provided transactions ensure synchrony on any signing protocol, and we won't start
// signing with threshold keys before we've confirmed them on-chain
let signer = data.signed.signer;
let Ok(()) = self.check_sign_data_len(signer, data.data.len()) else {
return;
};
let expected_len = match data.label {
@@ -653,11 +441,11 @@ impl<
attempt: data.attempt,
};
let Accumulation::Ready(DataSet::Participating(mut results)) =
self.handle_data(&data_spec, &data.data.encode(), &data.signed)
else {
return;
};
unflatten(self.spec, &mut results);
let id = SubstrateSignId {
session: self.spec.set().session,
@@ -678,16 +466,7 @@ impl<
}
Transaction::Sign(data) => {
let Ok(()) = self.check_sign_data_len(data.signed.signer, data.data.len()) else {
return;
};
@@ -697,9 +476,9 @@ impl<
attempt: data.attempt,
};
if let Accumulation::Ready(DataSet::Participating(mut results)) =
self.handle_data(&data_spec, &data.data.encode(), &data.signed)
{
unflatten(self.spec, &mut results);
let id =
SignId { session: self.spec.set().session, id: data.plan, attempt: data.attempt };
self
@@ -740,8 +519,7 @@ impl<
}
Transaction::SlashReport(points, signed) => {
let signer_range = self.spec.i(signed.signer).unwrap();
let signer_len = u16::from(signer_range.end) - u16::from(signer_range.start);
if points.len() != (self.spec.validators().len() - 1) {
self.fatal_slash(


@@ -1,7 +1,3 @@
use ciphersuite::{group::GroupEncoding, Ciphersuite, Ristretto};
use serai_client::validator_sets::primitives::ValidatorSet;
use tributary::{
ReadWrite,
transaction::{TransactionError, TransactionKind, Transaction as TransactionTrait},
@@ -24,39 +20,6 @@ pub use handle::*;
pub mod scanner;
pub fn removed_as_of_dkg_attempt(
getter: &impl Get,
genesis: [u8; 32],
attempt: u32,
) -> Option<Vec<<Ristretto as Ciphersuite>::G>> {
if attempt == 0 {
Some(vec![])
} else {
RemovedAsOfDkgAttempt::get(getter, genesis, attempt).map(|keys| {
keys.iter().map(|key| <Ristretto as Ciphersuite>::G::from_bytes(key).unwrap()).collect()
})
}
}
pub fn removed_as_of_set_keys(
getter: &impl Get,
set: ValidatorSet,
genesis: [u8; 32],
) -> Option<Vec<<Ristretto as Ciphersuite>::G>> {
// SeraiDkgCompleted has the key placed on-chain.
// This key can be uniquely mapped to an attempt so long as one participant was honest, which
// we assume, as we ourselves are a presumably honest participant.
// Resolve from the generated key to its attempt, then from the attempt to who was fatally
// slashed as of it.
// This expect will trigger if this is prematurely called and Substrate has tracked the keys yet
// we haven't locally synced and handled the Tributary
// All callers of this, at the time of writing, ensure the Tributary has sufficiently synced,
// making the panic with context more desirable than returning None
let attempt = KeyToDkgAttempt::get(getter, SeraiDkgCompleted::get(getter, set)?)
.expect("key completed on-chain didn't have an attempt related");
removed_as_of_dkg_attempt(getter, genesis, attempt)
}
pub async fn publish_signed_transaction<D: Db, P: crate::P2p>(
txn: &mut D::Transaction<'_>,
tributary: &Tributary<D, Transaction, P>,


@@ -1,15 +1,17 @@
use core::{marker::PhantomData, future::Future, time::Duration};
use std::sync::Arc;
use zeroize::Zeroizing;
use rand_core::OsRng;
use ciphersuite::{group::GroupEncoding, Ciphersuite, Ristretto};
use tokio::sync::broadcast;
use scale::{Encode, Decode};
use serai_client::{
primitives::Signature,
validator_sets::primitives::{KeyPair, ValidatorSet},
Serai,
};
@@ -67,8 +69,8 @@ pub trait PublishSeraiTransaction {
&self,
db: &(impl Sync + Get),
set: ValidatorSet,
key_pair: KeyPair,
signature_participants: bitvec::vec::BitVec<u8, bitvec::order::Lsb0>,
signature: Signature,
);
}
@@ -129,17 +131,12 @@ mod impl_pst_for_serai {
&self,
db: &(impl Sync + Get),
set: ValidatorSet,
key_pair: KeyPair,
signature_participants: bitvec::vec::BitVec<u8, bitvec::order::Lsb0>,
signature: Signature,
) {
let tx =
SeraiValidatorSets::set_keys(set.network, key_pair, signature_participants, signature);
async fn check(serai: SeraiValidatorSets<'_>, set: ValidatorSet, (): ()) -> bool {
if matches!(serai.keys(set).await, Ok(Some(_))) {
log::info!("another coordinator set key pair for {:?}", set);
@@ -249,18 +246,15 @@ impl<
let genesis = self.spec.genesis();
let current_fatal_slashes = FatalSlashes::get_as_keys(self.txn, genesis);
// Calculate the shares still present, spinning if not enough are
// still_present_shares is used by a below branch, yet it's a natural byproduct of checking if
// we should spin, hence storing it in a variable here
let still_present_shares = {
// Start with the original n value
let mut present_shares = self.spec.n();
// Remove everyone fatally slashed
let current_fatal_slashes = FatalSlashes::get_as_keys(self.txn, genesis);
for removed in &current_fatal_slashes {
let original_i_for_removed =
self.spec.i(*removed).expect("removed party was never present");
let removed_shares =
u16::from(original_i_for_removed.end) - u16::from(original_i_for_removed.start);
present_shares -= removed_shares;
@@ -276,79 +270,17 @@ impl<
tokio::time::sleep(core::time::Duration::from_secs(60)).await;
}
}
present_shares
};
for topic in ReattemptDb::take(self.txn, genesis, self.block_number) {
let attempt = AttemptDb::start_next_attempt(self.txn, genesis, topic);
log::info!("re-attempting {topic:?} with attempt {attempt}");
log::info!("potentially re-attempting {topic:?} with attempt {attempt}");
// Slash people who failed to participate as expected in the prior attempt
{
let prior_attempt = attempt - 1;
let (removed, expected_participants) = match topic {
Topic::Dkg => {
// Every validator who wasn't removed is expected to have participated
let removed =
crate::tributary::removed_as_of_dkg_attempt(self.txn, genesis, prior_attempt)
.expect("prior attempt didn't have its removed saved to disk");
let removed_set = removed.iter().copied().collect::<HashSet<_>>();
(
removed,
self
.spec
.validators()
.into_iter()
.filter_map(|(validator, _)| {
Some(validator).filter(|validator| !removed_set.contains(validator))
})
.collect(),
)
}
Topic::DkgConfirmation => {
panic!("TODO: re-attempting DkgConfirmation when we should be re-attempting the Dkg")
}
Topic::SubstrateSign(_) | Topic::Sign(_) => {
let removed =
crate::tributary::removed_as_of_set_keys(self.txn, self.spec.set(), genesis)
.expect("SubstrateSign/Sign yet have yet to set keys");
// TODO: If 67% sent preprocesses, this should be them. Else, this should be vec![]
let expected_participants = vec![];
(removed, expected_participants)
}
};
let (expected_topic, expected_label) = match topic {
Topic::Dkg => {
let n = self.spec.n(&removed);
// If we got all the DKG shares, we should be on DKG confirmation
let share_spec =
DataSpecification { topic: Topic::Dkg, label: Label::Share, attempt: prior_attempt };
if DataReceived::get(self.txn, genesis, &share_spec).unwrap_or(0) == n {
// Label::Share since there is no Label::Preprocess for DkgConfirmation since the
// preprocess is part of Topic::Dkg Label::Share
(Topic::DkgConfirmation, Label::Share)
} else {
let preprocess_spec = DataSpecification {
topic: Topic::Dkg,
label: Label::Preprocess,
attempt: prior_attempt,
};
// If we got all the DKG preprocesses, DKG shares
if DataReceived::get(self.txn, genesis, &preprocess_spec).unwrap_or(0) == n {
// Label::Share since there is no Label::Preprocess for DkgConfirmation since the
// preprocess is part of Topic::Dkg Label::Share
(Topic::Dkg, Label::Share)
} else {
(Topic::Dkg, Label::Preprocess)
}
}
}
Topic::DkgConfirmation => unreachable!(),
// If we got enough participants to move forward, then we expect shares from them all
Topic::SubstrateSign(_) | Topic::Sign(_) => (topic, Label::Share),
};
// TODO: If 67% sent preprocesses, this should be them. Else, this should be vec![]
let expected_participants: Vec<<Ristretto as Ciphersuite>::G> = vec![];
let mut did_not_participate = vec![];
for expected_participant in expected_participants {
@@ -356,8 +288,9 @@ impl<
self.txn,
genesis,
&DataSpecification {
topic,
// Since we got the preprocesses, we were supposed to get the shares
label: Label::Share,
attempt: prior_attempt,
},
&expected_participant.to_bytes(),
@@ -373,15 +306,8 @@ impl<
// Accordingly, clear did_not_participate
// TODO
// If during the DKG, explicitly mark these people as having been offline
// TODO: If they were offline sufficiently long ago, don't strike them off
if topic == Topic::Dkg {
let mut existing = OfflineDuringDkg::get(self.txn, genesis).unwrap_or(vec![]);
for did_not_participate in did_not_participate {
existing.push(did_not_participate.to_bytes());
}
OfflineDuringDkg::set(self.txn, genesis, &existing);
}
// TODO: Increment the slash points of people who didn't preprocess in some expected window
// of time
// Slash everyone who didn't participate as expected
// This may be overzealous as if a minority detects a completion, they'll abort yet the
@@ -411,75 +337,22 @@ impl<
then preprocesses. This only sends preprocesses).
*/
match topic {
Topic::Dkg => {
let mut removed = current_fatal_slashes.clone();
let t = self.spec.t();
{
let mut present_shares = still_present_shares;
// Load the parties marked as offline across the various attempts
let mut offline = OfflineDuringDkg::get(self.txn, genesis)
.unwrap_or(vec![])
.iter()
.map(|key| <Ristretto as Ciphersuite>::G::from_bytes(key).unwrap())
.collect::<Vec<_>>();
// Pop from the list to prioritize the removal of those recently offline
while let Some(offline) = offline.pop() {
// Make sure they weren't removed already (such as due to being fatally slashed)
// This also may trigger if they were offline across multiple attempts
if removed.contains(&offline) {
continue;
}
// If we can remove them and still meet the threshold, do so
let original_i_for_offline =
self.spec.i(&[], offline).expect("offline was never present?");
let offline_shares =
u16::from(original_i_for_offline.end) - u16::from(original_i_for_offline.start);
if (present_shares - offline_shares) >= t {
present_shares -= offline_shares;
removed.push(offline);
}
// If we've removed as many people as we can, break
if present_shares == t {
break;
}
}
}
RemovedAsOfDkgAttempt::set(
self.txn,
genesis,
attempt,
&removed.iter().map(<Ristretto as Ciphersuite>::G::to_bytes).collect(),
);
if DkgLocallyCompleted::get(self.txn, genesis).is_none() {
let Some(our_i) = self.spec.i(&removed, Ristretto::generator() * self.our_key.deref())
else {
continue;
};
// Since it wasn't completed, instruct the processor to start the next attempt
let id =
processor_messages::key_gen::KeyGenId { session: self.spec.set().session, attempt };
let params =
frost::ThresholdParams::new(t, self.spec.n(&removed), our_i.start).unwrap();
let shares = u16::from(our_i.end) - u16::from(our_i.start);
self
.processors
.send(
self.spec.set().network,
processor_messages::key_gen::CoordinatorMessage::GenerateKey { id, params, shares },
)
.await;
}
}
Topic::DkgConfirmation => {
if SeraiDkgCompleted::get(self.txn, self.spec.set()).is_none() {
log::info!("re-attempting DKG confirmation with attempt {attempt}");
// Since it wasn't completed, publish our nonces for the next attempt
let confirmation_nonces =
crate::tributary::dkg_confirmation_nonces(self.our_key, self.spec, self.txn, attempt);
let mut tx = Transaction::DkgConfirmationNonces {
attempt,
confirmation_nonces,
signed: Transaction::empty_signed(),
};
tx.sign(&mut OsRng, genesis, self.our_key);
self.publish_tributary_tx.publish_tributary_tx(tx).await;
}
}
Topic::SubstrateSign(inner_id) => {
let id = processor_messages::coordinator::SubstrateSignId {
session: self.spec.set().session,
@@ -496,6 +369,8 @@ impl<
crate::cosign_evaluator::LatestCosign::get(self.txn, self.spec.set().network)
.map_or(0, |cosign| cosign.block_number);
if latest_cosign < block_number {
log::info!("re-attempting cosigning {block_number:?} with attempt {attempt}");
// Instruct the processor to start the next attempt
self
.processors
@@ -512,6 +387,8 @@ impl<
SubstrateSignableId::Batch(batch) => {
// If the Batch hasn't appeared on-chain...
if BatchInstructionsHashDb::get(self.txn, self.spec.set().network, batch).is_none() {
log::info!("re-attempting signing batch {batch:?} with attempt {attempt}");
// Instruct the processor to start the next attempt
// The processor won't continue if it's already signed a Batch
// Prior checking if the Batch is on-chain just may reduce the non-participating
@@ -529,6 +406,11 @@ impl<
// If this Tributary hasn't been retired...
// (published SlashReport/took too long to do so)
if crate::RetiredTributaryDb::get(self.txn, self.spec.set()).is_none() {
log::info!(
"re-attempting signing slash report for {:?} with attempt {attempt}",
self.spec.set()
);
let report = SlashReport::get(self.txn, self.spec.set())
.expect("re-attempting signing a SlashReport we don't have?");
self
@@ -575,8 +457,7 @@ impl<
};
// Assign them 0 points for themselves
report.insert(i, 0);
let signer_i = self.spec.i(validator).unwrap();
let signer_len = u16::from(signer_i.end) - u16::from(signer_i.start);
// Push `n` copies, one for each of their shares
for _ in 0 .. signer_len {


@@ -55,7 +55,7 @@
*/
use core::ops::Deref;
use std::collections::{HashSet, HashMap};
use zeroize::{Zeroize, Zeroizing};
@@ -63,10 +63,7 @@ use rand_core::OsRng;
use blake2::{Digest, Blake2s256};
use ciphersuite::{group::ff::PrimeField, Ciphersuite, Ristretto};
use frost::{
FrostError,
dkg::{Participant, musig::musig},
@@ -77,10 +74,8 @@ use frost_schnorrkel::Schnorrkel;
use scale::Encode;
#[rustfmt::skip]
use serai_client::validator_sets::primitives::{ValidatorSet, KeyPair, musig_context, set_keys_message};
use serai_db::*;
@@ -89,6 +84,7 @@ use crate::tributary::TributarySpec;
create_db!(
SigningProtocolDb {
CachedPreprocesses: (context: &impl Encode) -> [u8; 32],
DataSignedWith: (context: &impl Encode) -> (Vec<u8>, HashMap<Participant, Vec<u8>>),
}
);
@@ -117,16 +113,22 @@ impl<T: DbTxn, C: Encode> SigningProtocol<'_, T, C> {
};
let encryption_key_slice: &mut [u8] = encryption_key.as_mut();
let algorithm = Schnorrkel::new(b"substrate");
// Create the MuSig keys
let keys: ThresholdKeys<Ristretto> =
musig(&musig_context(self.spec.set()), self.key, participants)
.expect("signing for a set we aren't in/validator present multiple times")
.into();
// Define the algorithm
let algorithm = Schnorrkel::new(b"substrate");
// Check if we've prior preprocessed
if CachedPreprocesses::get(self.txn, &self.context).is_none() {
// If we haven't, we create a machine solely to obtain the preprocess with
let (machine, _) =
AlgorithmMachine::new(algorithm.clone(), keys.clone()).preprocess(&mut OsRng);
// Cache and save the preprocess to disk
let mut cache = machine.cache();
assert_eq!(cache.0.len(), 32);
#[allow(clippy::needless_range_loop)]
@@ -137,13 +139,15 @@ impl<T: DbTxn, C: Encode> SigningProtocol<'_, T, C> {
CachedPreprocesses::set(self.txn, &self.context, &cache.0);
}
// We're now guaranteed to have the preprocess, hence why this `unwrap` is safe
let cached = CachedPreprocesses::get(self.txn, &self.context).unwrap();
let mut cached = Zeroizing::new(cached);
#[allow(clippy::needless_range_loop)]
for b in 0 .. 32 {
cached[b] ^= encryption_key_slice[b];
}
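// The same XOR keystream which encrypted the cached preprocess before it was persisted
// decrypts it here, so the at-rest copy never contains the plaintext preprocess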
encryption_key_slice.zeroize();
// Create the machine from the cached preprocess
let (machine, preprocess) =
AlgorithmSignMachine::from_cache(algorithm, keys, CachedPreprocess(cached));
@@ -156,8 +160,29 @@ impl<T: DbTxn, C: Encode> SigningProtocol<'_, T, C> {
mut serialized_preprocesses: HashMap<Participant, Vec<u8>>,
msg: &[u8],
) -> Result<(AlgorithmSignatureMachine<Ristretto, Schnorrkel>, [u8; 32]), Participant> {
// We can't clear the preprocess as we still need it to accumulate all of the shares
// We do save the message we signed so any future calls with distinct messages panic
// This assumes the txn deciding this data is committed before the share is broadcast
if let Some((existing_msg, existing_preprocesses)) =
DataSignedWith::get(self.txn, &self.context)
{
assert_eq!(msg, &existing_msg, "obtaining a signature share for a distinct message");
assert_eq!(
&serialized_preprocesses, &existing_preprocesses,
"obtaining a signature share with a distinct set of preprocesses"
);
} else {
DataSignedWith::set(
self.txn,
&self.context,
&(msg.to_vec(), serialized_preprocesses.clone()),
);
}
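// This guard makes share generation idempotent per (message, preprocesses) pair, preventing
// the single cached nonce from ever producing shares for two distinct messages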
// Get the preprocessed machine
let (machine, _) = self.preprocess_internal(participants);
// Deserialize all the preprocesses
let mut participants = serialized_preprocesses.keys().copied().collect::<Vec<_>>();
participants.sort();
let mut preprocesses = HashMap::new();
@@ -170,13 +195,14 @@ impl<T: DbTxn, C: Encode> SigningProtocol<'_, T, C> {
);
}
// Sign the share
let (machine, share) = machine.sign(preprocesses, msg).map_err(|e| match e {
FrostError::InternalError(e) => unreachable!("FrostError::InternalError {e}"),
FrostError::InvalidParticipant(_, _) |
FrostError::InvalidSigningSet(_) |
FrostError::InvalidParticipantQuantity(_, _) |
FrostError::DuplicatedParticipant(_) |
FrostError::MissingParticipant(_) => panic!("unexpected error during sign: {e:?}"),
FrostError::InvalidPreprocess(p) | FrostError::InvalidShare(p) => p,
})?;
@@ -207,24 +233,24 @@ impl<T: DbTxn, C: Encode> SigningProtocol<'_, T, C> {
}
// Get the keys of the participants, noted by their threshold i's, and return a new map
// indexed by their MuSig i's.
fn threshold_i_map_to_keys_and_musig_i_map(
spec: &TributarySpec,
our_key: &Zeroizing<<Ristretto as Ciphersuite>::F>,
mut map: HashMap<Participant, Vec<u8>>,
) -> (Vec<<Ristretto as Ciphersuite>::G>, HashMap<Participant, Vec<u8>>) {
// Insert our own index so calculations aren't offset
let our_threshold_i = spec
.i(<Ristretto as Ciphersuite>::generator() * our_key.deref())
.expect("not in a set we're signing for")
.start;
// Asserts we weren't unexpectedly already present
assert!(map.insert(our_threshold_i, vec![]).is_none());
let spec_validators = spec.validators();
let key_from_threshold_i = |threshold_i| {
for (key, _) in &spec_validators {
if threshold_i == spec.i(*key).expect("validator wasn't in a set they're in").start {
return *key;
}
}
@@ -235,29 +261,37 @@ fn threshold_i_map_to_keys_and_musig_i_map(
let mut threshold_is = map.keys().copied().collect::<Vec<_>>();
threshold_is.sort();
for threshold_i in threshold_is {
sorted.push((
threshold_i,
key_from_threshold_i(threshold_i),
map.remove(&threshold_i).unwrap(),
));
}
// Now that signers are sorted, with their shares, create a map with the i's needed for MuSig
let mut participants = vec![];
let mut map = HashMap::new();
let mut our_musig_i = None;
for (raw_i, (threshold_i, key, share)) in sorted.into_iter().enumerate() {
let musig_i = Participant::new(u16::try_from(raw_i).unwrap() + 1).unwrap();
if threshold_i == our_threshold_i {
our_musig_i = Some(musig_i);
}
participants.push(key);
map.insert(musig_i, share);
}
map.remove(&our_musig_i.unwrap()).unwrap();
(participants, map)
}
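// Illustrative: threshold i's {2, 5, 9}, with ours being 5, sort to MuSig i's 1, 2, and 3
// respectively. The placeholder entry inserted for ourselves (MuSig i 2) is then removed,
// leaving the others' shares keyed by their MuSig i's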
type DkgConfirmerSigningProtocol<'a, T> =
SigningProtocol<'a, T, (&'static [u8; 12], ValidatorSet, u32)>;
pub(crate) struct DkgConfirmer<'a, T: DbTxn> {
key: &'a Zeroizing<<Ristretto as Ciphersuite>::F>,
spec: &'a TributarySpec,
txn: &'a mut T,
attempt: u32,
}
@@ -268,19 +302,19 @@ impl<T: DbTxn> DkgConfirmer<'_, T> {
spec: &'a TributarySpec,
txn: &'a mut T,
attempt: u32,
) -> DkgConfirmer<'a, T> {
DkgConfirmer { key, spec, txn, attempt }
}
fn signing_protocol(&mut self) -> DkgConfirmerSigningProtocol<'_, T> {
let context = (b"DkgConfirmer", self.attempt);
let context = (b"DkgConfirmer", self.spec.set(), self.attempt);
SigningProtocol { key: self.key, spec: self.spec, txn: self.txn, context }
}
fn preprocess_internal(&mut self) -> (AlgorithmSignMachine<Ristretto, Schnorrkel>, [u8; 64]) {
// This preprocesses with just us as we only decide the participants after obtaining
// preprocesses
let participants = vec![<Ristretto as Ciphersuite>::generator() * self.key.deref()];
self.signing_protocol().preprocess_internal(&participants)
}
// Get the preprocess for this confirmation.
@@ -293,14 +327,9 @@ impl<T: DbTxn> DkgConfirmer<'_, T> {
preprocesses: HashMap<Participant, Vec<u8>>,
key_pair: &KeyPair,
) -> Result<(AlgorithmSignatureMachine<Ristretto, Schnorrkel>, [u8; 32]), Participant> {
let (participants, preprocesses) =
threshold_i_map_to_keys_and_musig_i_map(self.spec, self.key, preprocesses);
let msg = set_keys_message(&self.spec.set(), key_pair);
self.signing_protocol().share_internal(&participants, preprocesses, &msg)
}
// Get the share for this confirmation, if the preprocesses are valid.
@@ -318,8 +347,9 @@ impl<T: DbTxn> DkgConfirmer<'_, T> {
key_pair: &KeyPair,
shares: HashMap<Participant, Vec<u8>>,
) -> Result<[u8; 64], Participant> {
assert_eq!(preprocesses.keys().collect::<HashSet<_>>(), shares.keys().collect::<HashSet<_>>());
let shares = threshold_i_map_to_keys_and_musig_i_map(self.spec, self.key, shares).1;
let machine = self
.share_internal(preprocesses, key_pair)

View File

@@ -9,7 +9,7 @@ use frost::Participant;
use scale::Encode;
use borsh::{BorshSerialize, BorshDeserialize};
use serai_client::validator_sets::primitives::ValidatorSet;
fn borsh_serialize_validators<W: io::Write>(
validators: &Vec<(<Ristretto as Ciphersuite>::G, u16)>,
@@ -49,6 +49,7 @@ pub struct TributarySpec {
deserialize_with = "borsh_deserialize_validators"
)]
validators: Vec<(<Ristretto as Ciphersuite>::G, u16)>,
evrf_public_keys: Vec<([u8; 32], Vec<u8>)>,
}
impl TributarySpec {
@@ -56,16 +57,10 @@ impl TributarySpec {
serai_block: [u8; 32],
start_time: u64,
set: ValidatorSet,
validators: Vec<(<Ristretto as Ciphersuite>::G, u16)>,
evrf_public_keys: Vec<([u8; 32], Vec<u8>)>,
) -> TributarySpec {
Self { serai_block, start_time, set, validators, evrf_public_keys }
}
pub fn set(&self) -> ValidatorSet {
@@ -88,24 +83,15 @@ impl TributarySpec {
self.start_time
}
pub fn n(&self) -> u16 {
self.validators.iter().map(|(_, weight)| *weight).sum()
}
pub fn t(&self) -> u16 {
((2 * self.n()) / 3) + 1
}
pub fn i(&self, key: <Ristretto as Ciphersuite>::G) -> Option<Range<Participant>> {
let mut all_is = HashMap::new();
let mut i = 1;
for (validator, weight) in &self.validators {
@@ -116,34 +102,12 @@ impl TributarySpec {
i += weight;
}
Some(all_is.get(&key)?.clone())
}
pub fn reverse_lookup_i(&self, i: Participant) -> Option<<Ristretto as Ciphersuite>::G> {
for (validator, _) in &self.validators {
if self.i(*validator).map_or(false, |range| range.contains(&i)) {
return Some(*validator);
}
}
@@ -153,4 +117,8 @@ impl TributarySpec {
pub fn validators(&self) -> Vec<(<Ristretto as Ciphersuite>::G, u64)> {
self.validators.iter().map(|(validator, weight)| (*validator, u64::from(*weight))).collect()
}
pub fn evrf_public_keys(&self) -> Vec<([u8; 32], Vec<u8>)> {
self.evrf_public_keys.clone()
}
}


@@ -12,7 +12,6 @@ use ciphersuite::{
Ciphersuite, Ristretto,
};
use schnorr::SchnorrSignature;
use frost::Participant;
use scale::{Encode, Decode};
use processor_messages::coordinator::SubstrateSignableId;
@@ -130,32 +129,26 @@ impl<Id: Clone + PartialEq + Eq + Debug + Encode + Decode> SignData<Id> {
#[derive(Clone, PartialEq, Eq)]
pub enum Transaction {
RemoveParticipant {
participant: <Ristretto as Ciphersuite>::G,
signed: Signed,
},
DkgParticipation {
participation: Vec<u8>,
signed: Signed,
},
DkgConfirmationNonces {
// The confirmation attempt
attempt: u32,
// The nonces for DKG confirmation attempt #attempt
confirmation_nonces: [u8; 64],
signed: Signed,
},
DkgConfirmationShare {
// The confirmation attempt
attempt: u32,
// The share for DKG confirmation attempt #attempt
confirmation_share: [u8; 32],
signed: Signed,
},
@@ -197,29 +190,22 @@ pub enum Transaction {
impl Debug for Transaction {
fn fmt(&self, fmt: &mut core::fmt::Formatter<'_>) -> Result<(), core::fmt::Error> {
match self {
Transaction::RemoveParticipant { participant, signed } => fmt
.debug_struct("Transaction::RemoveParticipant")
.field("participant", &hex::encode(participant.to_bytes()))
.field("signer", &hex::encode(signed.signer.to_bytes()))
.finish_non_exhaustive(),
Transaction::DkgParticipation { signed, .. } => fmt
.debug_struct("Transaction::DkgParticipation")
.field("signer", &hex::encode(signed.signer.to_bytes()))
.finish_non_exhaustive(),
Transaction::DkgConfirmationNonces { attempt, signed, .. } => fmt
.debug_struct("Transaction::DkgConfirmationNonces")
.field("attempt", attempt)
.field("signer", &hex::encode(signed.signer.to_bytes()))
.finish_non_exhaustive(),
Transaction::DkgConfirmationShare { attempt, signed, .. } => fmt
.debug_struct("Transaction::DkgConfirmationShare")
.field("attempt", attempt)
.field("signer", &hex::encode(signed.signer.to_bytes()))
.finish_non_exhaustive(),
@@ -261,43 +247,32 @@ impl ReadWrite for Transaction {
reader.read_exact(&mut kind)?;
match kind[0] {
0 => Ok(Transaction::RemoveParticipant {
participant: Ristretto::read_G(reader)?,
signed: Signed::read_without_nonce(reader, 0)?,
}),
1 => {
let participation = {
let mut participation_len = [0; 4];
reader.read_exact(&mut participation_len)?;
let participation_len = u32::from_le_bytes(participation_len);
if participation_len > u32::try_from(TRANSACTION_SIZE_LIMIT).unwrap() {
Err(io::Error::other(
"commitments present in transaction exceeded transaction size limit",
"participation present in transaction exceeded transaction size limit",
))?;
}
let participation_len = usize::try_from(participation_len).unwrap();
let mut participation = vec![0; participation_len];
reader.read_exact(&mut participation)?;
participation
};
let signed = Signed::read_without_nonce(reader, 0)?;
Ok(Transaction::DkgParticipation { participation, signed })
}
2 => {
@@ -305,36 +280,12 @@ impl ReadWrite for Transaction {
reader.read_exact(&mut attempt)?;
let attempt = u32::from_le_bytes(attempt);
let mut confirmation_nonces = [0; 64];
reader.read_exact(&mut confirmation_nonces)?;
let signed = Signed::read_without_nonce(reader, 0)?;
Ok(Transaction::DkgConfirmationNonces { attempt, confirmation_nonces, signed })
}
3 => {
@@ -342,53 +293,21 @@ impl ReadWrite for Transaction {
reader.read_exact(&mut attempt)?;
let attempt = u32::from_le_bytes(attempt);
let mut confirmation_share = [0; 32];
reader.read_exact(&mut confirmation_share)?;
let signed = Signed::read_without_nonce(reader, 1)?;
Ok(Transaction::DkgConfirmationShare { attempt, confirmation_share, signed })
}
4 => {
let mut block = [0; 32];
reader.read_exact(&mut block)?;
Ok(Transaction::CosignSubstrateBlock(block))
}
5 => {
let mut block = [0; 32];
reader.read_exact(&mut block)?;
let mut batch = [0; 4];
@@ -396,16 +315,16 @@ impl ReadWrite for Transaction {
Ok(Transaction::Batch { block, batch: u32::from_le_bytes(batch) })
}
6 => {
let mut block = [0; 8];
reader.read_exact(&mut block)?;
Ok(Transaction::SubstrateBlock(u64::from_le_bytes(block)))
}
7 => SignData::read(reader).map(Transaction::SubstrateSign),
8 => SignData::read(reader).map(Transaction::Sign),
9 => {
let mut plan = [0; 32];
reader.read_exact(&mut plan)?;
@@ -420,7 +339,7 @@ impl ReadWrite for Transaction {
Ok(Transaction::SignCompleted { plan, tx_hash, first_signer, signature })
}
10 => {
let mut len = [0];
reader.read_exact(&mut len)?;
let len = len[0];
@@ -445,109 +364,59 @@ impl ReadWrite for Transaction {
fn write<W: io::Write>(&self, writer: &mut W) -> io::Result<()> {
match self {
Transaction::RemoveParticipant { participant, signed } => {
writer.write_all(&[0])?;
writer.write_all(&participant.to_bytes())?;
signed.write_without_nonce(writer)
}
Transaction::DkgParticipation { participation, signed } => {
writer.write_all(&[1])?;
writer.write_all(&u32::try_from(participation.len()).unwrap().to_le_bytes())?;
writer.write_all(participation)?;
signed.write_without_nonce(writer)
}
Transaction::DkgConfirmationNonces { attempt, confirmation_nonces, signed } => {
writer.write_all(&[2])?;
writer.write_all(&attempt.to_le_bytes())?;
writer.write_all(confirmation_nonces)?;
signed.write_without_nonce(writer)
}
Transaction::DkgConfirmationShare { attempt, confirmation_share, signed } => {
writer.write_all(&[3])?;
writer.write_all(&attempt.to_le_bytes())?;
writer.write_all(confirmation_share)?;
signed.write_without_nonce(writer)
}
Transaction::CosignSubstrateBlock(block) => {
writer.write_all(&[4])?;
writer.write_all(block)
}
Transaction::Batch { block, batch } => {
writer.write_all(&[5])?;
writer.write_all(block)?;
writer.write_all(&batch.to_le_bytes())
}
Transaction::SubstrateBlock(block) => {
writer.write_all(&[6])?;
writer.write_all(&block.to_le_bytes())
}
Transaction::SubstrateSign(data) => {
writer.write_all(&[7])?;
data.write(writer)
}
Transaction::Sign(data) => {
writer.write_all(&[8])?;
data.write(writer)
}
Transaction::SignCompleted { plan, tx_hash, first_signer, signature } => {
writer.write_all(&[9])?;
writer.write_all(plan)?;
writer
.write_all(&[u8::try_from(tx_hash.len()).expect("tx hash length exceed 255 bytes")])?;
@@ -556,7 +425,7 @@ impl ReadWrite for Transaction {
signature.write(writer)
}
Transaction::SlashReport(points, signed) => {
writer.write_all(&[10])?;
writer.write_all(&[u8::try_from(points.len()).unwrap()])?;
for points in points {
writer.write_all(&points.to_le_bytes())?;
@@ -570,15 +439,16 @@ impl ReadWrite for Transaction {
impl TransactionTrait for Transaction {
fn kind(&self) -> TransactionKind<'_> {
match self {
Transaction::RemoveParticipant { participant, signed } => {
TransactionKind::Signed((b"remove", participant.to_bytes()).encode(), signed)
}
Transaction::DkgParticipation { signed, .. } => {
TransactionKind::Signed(b"dkg".to_vec(), signed)
}
Transaction::DkgConfirmationNonces { attempt, signed, .. } |
Transaction::DkgConfirmationShare { attempt, signed, .. } => {
TransactionKind::Signed((b"dkg_confirmation", attempt).encode(), signed)
}
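// Scoping the signed-nonce ordering by attempt lets each confirmation attempt run its
// Nonces/Share pair independently of prior attempts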
Transaction::CosignSubstrateBlock(_) => TransactionKind::Provided("cosign"),
@@ -645,11 +515,14 @@ impl Transaction {
fn signed(tx: &mut Transaction) -> (u32, &mut Signed) {
#[allow(clippy::match_same_arms)] // Doesn't make semantic sense here
let nonce = match tx {
Transaction::RemoveParticipant { .. } => 0,
Transaction::DkgParticipation { .. } => 0,
// Uses a nonce of 0 as it has an internal attempt counter we distinguish by
Transaction::DkgConfirmationNonces { .. } => 0,
// Uses a nonce of 1 due to internal attempt counter and due to following
// DkgConfirmationNonces
Transaction::DkgConfirmationShare { .. } => 1,
Transaction::CosignSubstrateBlock(_) => panic!("signing CosignSubstrateBlock"),
@@ -668,11 +541,10 @@ impl Transaction {
nonce,
#[allow(clippy::match_same_arms)]
match tx {
Transaction::RemoveParticipant { ref mut signed, .. } |
Transaction::DkgParticipation { ref mut signed, .. } |
Transaction::DkgConfirmationNonces { ref mut signed, .. } => signed,
Transaction::DkgConfirmationShare { ref mut signed, .. } => signed,
Transaction::CosignSubstrateBlock(_) => panic!("signing CosignSubstrateBlock"),

View File

@@ -6,6 +6,7 @@ license = "AGPL-3.0-only"
repository = "https://github.com/serai-dex/serai/tree/develop/coordinator/tributary"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
edition = "2021"
rust-version = "1.81"
[package.metadata.docs.rs]
all-features = true
@@ -16,7 +17,7 @@ workspace = true
[dependencies]
async-trait = { version = "0.1", default-features = false }
thiserror = { version = "1", default-features = false }
thiserror = { version = "2", default-features = false, features = ["std"] }
subtle = { version = "^2", default-features = false, features = ["std"] }
zeroize = { version = "^1.5", default-features = false, features = ["std"] }

View File

@@ -50,13 +50,17 @@ pub(crate) use crate::tendermint::*;
pub mod tests;
/// Size limit for an individual transaction.
// This needs to be big enough to participate in a 101-of-150 eVRF DKG with each element taking
// `MAX_KEY_LEN`. This also needs to be big enough to participate in signing 520 Bitcoin inputs
// with 49 key shares, and signing 120 Monero inputs with 49 key shares.
// TODO: Add a test for these properties
pub const TRANSACTION_SIZE_LIMIT: usize = 2_000_000;
/// Amount of transactions a single account may have in the mempool.
pub const ACCOUNT_MEMPOOL_LIMIT: u32 = 50;
/// Block size limit.
// This targets a growth limit of roughly 30 GB a day, under load, in order to prevent a malicious
// participant from flooding disks and causing out-of-space errors in other processes.
pub const BLOCK_SIZE_LIMIT: usize = 2_001_000;
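// A minimal consistency sketch for these limits (illustrative; assumes ~1 KB of block
// overhead, which is not a constant defined here):
#[cfg(test)]
mod size_limit_sketch {
use super::*;
#[test]
fn max_tx_fits_in_a_block() {
const ASSUMED_BLOCK_OVERHEAD: usize = 1_000;
assert!(TRANSACTION_SIZE_LIMIT + ASSUMED_BLOCK_OVERHEAD <= BLOCK_SIZE_LIMIT);
}
}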
pub(crate) const TENDERMINT_MESSAGE: u8 = 0;
pub(crate) const TRANSACTION_MESSAGE: u8 = 1;

View File

@@ -6,6 +6,7 @@ license = "MIT"
repository = "https://github.com/serai-dex/serai/tree/develop/coordinator/tendermint"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
edition = "2021"
rust-version = "1.81"
[package.metadata.docs.rs]
all-features = true
@@ -16,7 +17,7 @@ workspace = true
[dependencies]
async-trait = { version = "0.1", default-features = false }
thiserror = { version = "1", default-features = false }
thiserror = { version = "2", default-features = false, features = ["std"] }
hex = { version = "0.4", default-features = false, features = ["std"] }
log = { version = "0.4", default-features = false, features = ["std"] }

View File

@@ -7,7 +7,7 @@ repository = "https://github.com/serai-dex/serai/tree/develop/crypto/ciphersuite
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = ["ciphersuite", "ff", "group"]
edition = "2021"
rust-version = "1.74"
rust-version = "1.80"
[package.metadata.docs.rs]
all-features = true


@@ -7,7 +7,7 @@ repository = "https://github.com/serai-dex/serai/tree/develop/crypto/dalek-ff-gr
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = ["curve25519", "ed25519", "ristretto", "dalek", "group"]
edition = "2021"
rust-version = "1.66"
rust-version = "1.71"
[package.metadata.docs.rs]
all-features = true

View File

@@ -35,7 +35,7 @@ impl_modulus!(
type ResidueType = Residue<FieldModulus, { FieldModulus::LIMBS }>;
/// A constant-time implementation of the Ed25519 field.
#[derive(Clone, Copy, PartialEq, Eq, Default, Debug, Zeroize)]
pub struct FieldElement(ResidueType);
// Square root of -1.

View File

@@ -7,7 +7,7 @@ repository = "https://github.com/serai-dex/serai/tree/develop/crypto/dkg"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = ["dkg", "multisig", "threshold", "ff", "group"]
edition = "2021"
rust-version = "1.79"
rust-version = "1.81"
[package.metadata.docs.rs]
all-features = true
@@ -17,7 +17,7 @@ rustdoc-args = ["--cfg", "docsrs"]
workspace = true
[dependencies]
thiserror = { version = "1", default-features = false, optional = true }
thiserror = { version = "2", default-features = false }
rand_core = { version = "0.6", default-features = false }
@@ -36,13 +36,29 @@ multiexp = { path = "../multiexp", version = "0.4", default-features = false }
schnorr = { package = "schnorr-signatures", path = "../schnorr", version = "^0.5.1", default-features = false }
dleq = { path = "../dleq", version = "^0.4.1", default-features = false }
# eVRF DKG dependencies
generic-array = { version = "1", default-features = false, features = ["alloc"], optional = true }
blake2 = { version = "0.10", default-features = false, features = ["std"], optional = true }
rand_chacha = { version = "0.3", default-features = false, features = ["std"], optional = true }
generalized-bulletproofs = { path = "../evrf/generalized-bulletproofs", default-features = false, optional = true }
ec-divisors = { path = "../evrf/divisors", default-features = false, optional = true }
generalized-bulletproofs-circuit-abstraction = { path = "../evrf/circuit-abstraction", optional = true }
generalized-bulletproofs-ec-gadgets = { path = "../evrf/ec-gadgets", optional = true }
secq256k1 = { path = "../evrf/secq256k1", optional = true }
embedwards25519 = { path = "../evrf/embedwards25519", optional = true }
[dev-dependencies]
rand_core = { version = "0.6", default-features = false, features = ["getrandom"] }
rand = { version = "0.8", default-features = false, features = ["std"] }
ciphersuite = { path = "../ciphersuite", default-features = false, features = ["ristretto"] }
generalized-bulletproofs = { path = "../evrf/generalized-bulletproofs", features = ["tests"] }
ec-divisors = { path = "../evrf/divisors", features = ["pasta"] }
pasta_curves = "0.5"
[features]
std = [
"thiserror",
"thiserror/std",
"rand_core/std",
@@ -62,5 +78,21 @@ std = [
"dleq/serialize"
]
borsh = ["dep:borsh"]
evrf = [
"std",
"dep:generic-array",
"dep:blake2",
"dep:rand_chacha",
"dep:generalized-bulletproofs",
"dep:ec-divisors",
"dep:generalized-bulletproofs-circuit-abstraction",
"dep:generalized-bulletproofs-ec-gadgets",
]
evrf-secp256k1 = ["evrf", "ciphersuite/secp256k1", "secq256k1"]
evrf-ed25519 = ["evrf", "ciphersuite/ed25519", "embedwards25519"]
evrf-ristretto = ["evrf", "ciphersuite/ristretto", "embedwards25519"]
tests = ["rand_core/getrandom"]
default = ["std"]

View File

@@ -98,11 +98,11 @@ fn ecdh<C: Ciphersuite>(private: &Zeroizing<C::F>, public: C::G) -> Zeroizing<C:
// Each ecdh must be distinct. Reuse of an ecdh for multiple ciphers will cause the messages to be
// leaked.
fn cipher<C: Ciphersuite>(context: &str, ecdh: &Zeroizing<C::G>) -> ChaCha20 {
fn cipher<C: Ciphersuite>(context: [u8; 32], ecdh: &Zeroizing<C::G>) -> ChaCha20 {
// Ideally, we'd box this transcript with ZAlloc, yet that's only possible on nightly
// TODO: https://github.com/serai-dex/serai/issues/151
let mut transcript = RecommendedTranscript::new(b"DKG Encryption v0.2");
transcript.append_message(b"context", context.as_bytes());
transcript.append_message(b"context", context);
transcript.domain_separate(b"encryption_key");
@@ -134,7 +134,7 @@ fn cipher<C: Ciphersuite>(context: &str, ecdh: &Zeroizing<C::G>) -> ChaCha20 {
fn encrypt<R: RngCore + CryptoRng, C: Ciphersuite, E: Encryptable>(
rng: &mut R,
context: &str,
context: [u8; 32],
from: Participant,
to: C::G,
mut msg: Zeroizing<E>,
@@ -197,7 +197,7 @@ impl<C: Ciphersuite, E: Encryptable> EncryptedMessage<C, E> {
pub(crate) fn invalidate_msg<R: RngCore + CryptoRng>(
&mut self,
rng: &mut R,
context: &str,
context: [u8; 32],
from: Participant,
) {
// Invalidate the message by specifying a new key/Schnorr PoP
@@ -219,7 +219,7 @@ impl<C: Ciphersuite, E: Encryptable> EncryptedMessage<C, E> {
pub(crate) fn invalidate_share_serialization<R: RngCore + CryptoRng>(
&mut self,
rng: &mut R,
context: &str,
context: [u8; 32],
from: Participant,
to: C::G,
) {
@@ -243,7 +243,7 @@ impl<C: Ciphersuite, E: Encryptable> EncryptedMessage<C, E> {
pub(crate) fn invalidate_share_value<R: RngCore + CryptoRng>(
&mut self,
rng: &mut R,
context: &str,
context: [u8; 32],
from: Participant,
to: C::G,
) {
@@ -300,14 +300,14 @@ impl<C: Ciphersuite> EncryptionKeyProof<C> {
// This still doesn't mean the DKG offers an authenticated channel. The per-message keys have no
// root of trust other than their existence in the assumed-to-exist external authenticated channel.
fn pop_challenge<C: Ciphersuite>(
context: &str,
context: [u8; 32],
nonce: C::G,
key: C::G,
sender: Participant,
msg: &[u8],
) -> C::F {
let mut transcript = RecommendedTranscript::new(b"DKG Encryption Key Proof of Possession v0.2");
transcript.append_message(b"context", context.as_bytes());
transcript.append_message(b"context", context);
transcript.domain_separate(b"proof_of_possession");
@@ -323,9 +323,9 @@ fn pop_challenge<C: Ciphersuite>(
C::hash_to_F(b"DKG-encryption-proof_of_possession", &transcript.challenge(b"schnorr"))
}
fn encryption_key_transcript(context: &str) -> RecommendedTranscript {
fn encryption_key_transcript(context: [u8; 32]) -> RecommendedTranscript {
let mut transcript = RecommendedTranscript::new(b"DKG Encryption Key Correctness Proof v0.2");
transcript.append_message(b"context", context.as_bytes());
transcript.append_message(b"context", context);
transcript
}
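As this hunk shows, the context argument is now a fixed 32-byte array rather than a string, so it can be appended to transcripts directly. A caller holding a string-valued context can hash it down to the required width; a minimal sketch, assuming the `blake2` crate this crate already depends on (the `domain` parameter is illustrative, not an API of this crate):

use blake2::{Digest, Blake2s256};

// Hash an arbitrary-length domain string into the fixed-width [u8; 32] context
fn context_bytes(domain: &str) -> [u8; 32] {
  Blake2s256::digest(domain.as_bytes()).into()
}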
@@ -337,58 +337,17 @@ pub(crate) enum DecryptionError {
InvalidProof,
}
// A simple box for managing encryption.
#[derive(Clone)]
pub(crate) struct Encryption<C: Ciphersuite> {
context: String,
i: Option<Participant>,
enc_key: Zeroizing<C::F>,
enc_pub_key: C::G,
// A simple box for managing decryption.
#[derive(Clone, Debug)]
pub(crate) struct Decryption<C: Ciphersuite> {
context: [u8; 32],
enc_keys: HashMap<Participant, C::G>,
}
impl<C: Ciphersuite> fmt::Debug for Encryption<C> {
fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result {
fmt
.debug_struct("Encryption")
.field("context", &self.context)
.field("i", &self.i)
.field("enc_pub_key", &self.enc_pub_key)
.field("enc_keys", &self.enc_keys)
.finish_non_exhaustive()
impl<C: Ciphersuite> Decryption<C> {
pub(crate) fn new(context: [u8; 32]) -> Self {
Self { context, enc_keys: HashMap::new() }
}
}
impl<C: Ciphersuite> Zeroize for Encryption<C> {
fn zeroize(&mut self) {
self.enc_key.zeroize();
self.enc_pub_key.zeroize();
for (_, mut value) in self.enc_keys.drain() {
value.zeroize();
}
}
}
impl<C: Ciphersuite> Encryption<C> {
pub(crate) fn new<R: RngCore + CryptoRng>(
context: String,
i: Option<Participant>,
rng: &mut R,
) -> Self {
let enc_key = Zeroizing::new(C::random_nonzero_F(rng));
Self {
context,
i,
enc_pub_key: C::generator() * enc_key.deref(),
enc_key,
enc_keys: HashMap::new(),
}
}
pub(crate) fn registration<M: Message>(&self, msg: M) -> EncryptionKeyMessage<C, M> {
EncryptionKeyMessage { msg, enc_key: self.enc_pub_key }
}
pub(crate) fn register<M: Message>(
&mut self,
participant: Participant,
@@ -402,13 +361,109 @@ impl<C: Ciphersuite> Encryption<C> {
msg.msg
}
// Given a message, the intended decryptor, and a proof for its key, decrypt the message.
// Errors if the signature or proof is invalid.
pub(crate) fn decrypt_with_proof<E: Encryptable>(
&self,
from: Participant,
decryptor: Participant,
mut msg: EncryptedMessage<C, E>,
// There's no encryption key proof if the accusation is of an invalid signature
proof: Option<EncryptionKeyProof<C>>,
) -> Result<Zeroizing<E>, DecryptionError> {
if !msg.pop.verify(
msg.key,
pop_challenge::<C>(self.context, msg.pop.R, msg.key, from, msg.msg.deref().as_ref()),
) {
Err(DecryptionError::InvalidSignature)?;
}
if let Some(proof) = proof {
// Verify this is the decryption key for this message
proof
.dleq
.verify(
&mut encryption_key_transcript(self.context),
&[C::generator(), msg.key],
&[self.enc_keys[&decryptor], *proof.key],
)
.map_err(|_| DecryptionError::InvalidProof)?;
cipher::<C>(self.context, &proof.key).apply_keystream(msg.msg.as_mut().as_mut());
Ok(msg.msg)
} else {
Err(DecryptionError::InvalidProof)
}
}
}
// A simple box for managing encryption.
#[derive(Clone)]
pub(crate) struct Encryption<C: Ciphersuite> {
context: [u8; 32],
i: Participant,
enc_key: Zeroizing<C::F>,
enc_pub_key: C::G,
decryption: Decryption<C>,
}
impl<C: Ciphersuite> fmt::Debug for Encryption<C> {
fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result {
fmt
.debug_struct("Encryption")
.field("context", &self.context)
.field("i", &self.i)
.field("enc_pub_key", &self.enc_pub_key)
.field("decryption", &self.decryption)
.finish_non_exhaustive()
}
}
impl<C: Ciphersuite> Zeroize for Encryption<C> {
fn zeroize(&mut self) {
self.enc_key.zeroize();
self.enc_pub_key.zeroize();
for (_, mut value) in self.decryption.enc_keys.drain() {
value.zeroize();
}
}
}
impl<C: Ciphersuite> Encryption<C> {
pub(crate) fn new<R: RngCore + CryptoRng>(
context: [u8; 32],
i: Participant,
rng: &mut R,
) -> Self {
let enc_key = Zeroizing::new(C::random_nonzero_F(rng));
Self {
context,
i,
enc_pub_key: C::generator() * enc_key.deref(),
enc_key,
decryption: Decryption::new(context),
}
}
pub(crate) fn registration<M: Message>(&self, msg: M) -> EncryptionKeyMessage<C, M> {
EncryptionKeyMessage { msg, enc_key: self.enc_pub_key }
}
pub(crate) fn register<M: Message>(
&mut self,
participant: Participant,
msg: EncryptionKeyMessage<C, M>,
) -> M {
self.decryption.register(participant, msg)
}
pub(crate) fn encrypt<R: RngCore + CryptoRng, E: Encryptable>(
&self,
rng: &mut R,
participant: Participant,
msg: Zeroizing<E>,
) -> EncryptedMessage<C, E> {
encrypt(rng, &self.context, self.i.unwrap(), self.enc_keys[&participant], msg)
encrypt(rng, self.context, self.i, self.decryption.enc_keys[&participant], msg)
}
pub(crate) fn decrypt<R: RngCore + CryptoRng, I: Copy + Zeroize, E: Encryptable>(
@@ -426,18 +481,18 @@ impl<C: Ciphersuite> Encryption<C> {
batch,
batch_id,
msg.key,
pop_challenge::<C>(&self.context, msg.pop.R, msg.key, from, msg.msg.deref().as_ref()),
pop_challenge::<C>(self.context, msg.pop.R, msg.key, from, msg.msg.deref().as_ref()),
);
let key = ecdh::<C>(&self.enc_key, msg.key);
cipher::<C>(&self.context, &key).apply_keystream(msg.msg.as_mut().as_mut());
cipher::<C>(self.context, &key).apply_keystream(msg.msg.as_mut().as_mut());
(
msg.msg,
EncryptionKeyProof {
key,
dleq: DLEqProof::prove(
rng,
&mut encryption_key_transcript(&self.context),
&mut encryption_key_transcript(self.context),
&[C::generator(), msg.key],
&self.enc_key,
),
@@ -445,38 +500,7 @@ impl<C: Ciphersuite> Encryption<C> {
)
}
// Given a message, and the intended decryptor, and a proof for its key, decrypt the message.
// Returns None if the key was wrong.
pub(crate) fn decrypt_with_proof<E: Encryptable>(
&self,
from: Participant,
decryptor: Participant,
mut msg: EncryptedMessage<C, E>,
// There's no encryption key proof if the accusation is of an invalid signature
proof: Option<EncryptionKeyProof<C>>,
) -> Result<Zeroizing<E>, DecryptionError> {
if !msg.pop.verify(
msg.key,
pop_challenge::<C>(&self.context, msg.pop.R, msg.key, from, msg.msg.deref().as_ref()),
) {
Err(DecryptionError::InvalidSignature)?;
}
if let Some(proof) = proof {
// Verify this is the decryption key for this message
proof
.dleq
.verify(
&mut encryption_key_transcript(&self.context),
&[C::generator(), msg.key],
&[self.enc_keys[&decryptor], *proof.key],
)
.map_err(|_| DecryptionError::InvalidProof)?;
cipher::<C>(&self.context, &proof.key).apply_keystream(msg.msg.as_mut().as_mut());
Ok(msg.msg)
} else {
Err(DecryptionError::InvalidProof)
}
pub(crate) fn into_decryption(self) -> Decryption<C> {
self.decryption
}
}

crypto/dkg/src/evrf/mod.rs Normal file
View File

@@ -0,0 +1,584 @@
/*
We implement a DKG using an eVRF, as detailed in the eVRF paper. For the eVRF itself, we do not
use a Paillier-based construction, nor the detailed construction premised on a Bulletproof.
For reference, the detailed construction premised on a Bulletproof involves two curves, notated
here as `C` and `E`, where the scalar field of `C` is the field of `E`. Accordingly, Bulletproofs
over `C` can efficiently perform group operations of points of curve `E`. Each participant has a
private point (`P_i`) on curve `E` committed to over curve `C`. The eVRF selects a pair of
scalars `a, b`, where the participant proves in-Bulletproof the points `A_i, B_i` are
`a * P_i, b * P_i`. The eVRF proceeds to commit to `A_i.x + B_i.x` in a Pedersen Commitment.
Our eVRF uses
[Generalized Bulletproofs](
https://repo.getmonero.org/monero-project/ccs-proposals
/uploads/a9baa50c38c6312efc0fea5c6a188bb9/gbp.pdf
).
This allows us much larger witnesses without growing the reference string, and enables us to
efficiently sample challenges off in-circuit variables (via placing the variables in a vector
commitment, then challenging from a transcript of the commitments). We proceed to use
[elliptic curve divisors](
https://repo.getmonero.org/-/project/54/
uploads/eb1bf5b4d4855a3480c38abf895bd8e8/Veridise_Divisor_Proofs.pdf
)
(which require the ability to sample a challenge off in-circuit variables) to prove discrete
logarithms efficiently.
This is done via having a private scalar (`p_i`) on curve `E`, not a private point, and
publishing the public key for it (`P_i = p_i * G`, where `G` is a generator of `E`). The eVRF
samples two points with unknown discrete logarithms `A, B`, and the circuit proves a Pedersen
Commitment commits to `(p_i * A).x + (p_i * B).x`.
With the eVRF established, we now detail our other novel aspect. The eVRF paper expects secret
shares to be sent to the other parties yet does not detail a precise way to do so. If we
encrypted the secret shares with some stream cipher, each recipient would have to attest validity
or accuse the sender of impropriety. We want an encryption scheme where anyone can verify the
secret shares were encrypted properly, without additional info, efficiently.
Please note that from the published commitments, it's possible to calculate a commitment to the
secret share each party should receive (`V_i`).
We have the sender sample two scalars per recipient, denoted `x_i, y_i` (where `i` is the
recipient index). They perform the eVRF to prove a Pedersen Commitment commits to
`z_i = (x_i * P_i).x + (y_i * P_i).x` and `x_i, y_i` are the discrete logarithms of `X_i, Y_i`
over `G`. They then publish the encrypted share `s_i + z_i` and `X_i, Y_i`.
The recipient is able to decrypt the share via calculating
`s_i - ((p_i * X_i).x + (p_i * Y_i).x)`.
To verify the secret share, we have the `F` terms of the Pedersen Commitments revealed (where
`F, H` are generators of `C`, `F` is used for binding and `H` for blinding). This already needs
to be done for the eVRF outputs used within the DKG, in order to obtain the commitments to the
coefficients. When we have the commitment `Z_i = ((p_i * A).x + (p_i * B).x) * F`, we simply
check `s_i * F = Z_i + V_i`.
In order to open the Pedersen Commitments to their `F` terms, we transcript the commitments and
the claimed openings, then assign random weights to each pair of `(commitment, opening)`. The
prover proves knowledge of the discrete logarithm of the sum of the weighted commitments, minus
the sum of the weighted openings, over `H`.
The benefit to this construction is that given a broadcast channel which is reliable and
ordered, only `t` messages must be broadcast from honest parties in order to create a `t`-of-`n`
multisig. If the encrypted secret shares were not verifiable, one would need at least `t + n`
messages to ensure every participant has a correct dealing and can participate in future
reconstructions of the secret. This would also require all `n` parties be online, whereas this is
robust to threshold `t`.
*/
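The decryption identity above rests on scalar-multiplication commutativity: `x_i * P_i = x_i * p_i * G = p_i * X_i` (and likewise for `y_i`), so sender and recipient derive the same masks. A toy sketch of that identity, using exponentiation in a multiplicative group modulo a small prime in place of elliptic curve points (all constants are illustrative, not from this codebase):

// Toy group: g^s mod P stands in for the scalar multiplication s * G.
const P: u64 = 2_147_483_647; // a Mersenne prime, 2^31 - 1
const G: u64 = 7;

fn mod_pow(mut base: u64, mut exp: u64, modulus: u64) -> u64 {
  let mut res = 1u64;
  base %= modulus;
  while exp > 0 {
    if exp & 1 == 1 {
      res = (res as u128 * base as u128 % modulus as u128) as u64;
    }
    base = (base as u128 * base as u128 % modulus as u128) as u64;
    exp >>= 1;
  }
  res
}

fn main() {
  let (p_i, x_i, y_i) = (123_456u64, 789_012u64, 345_678u64); // recipient key, sender nonces
  let big_p = mod_pow(G, p_i, P); // P_i = p_i * G
  let (big_x, big_y) = (mod_pow(G, x_i, P), mod_pow(G, y_i, P)); // X_i, Y_i
  // Sender's mask: (x_i * P_i) + (y_i * P_i), taking the group element as its "coordinate"
  let sender = (mod_pow(big_p, x_i, P) + mod_pow(big_p, y_i, P)) % P;
  // Recipient's mask: (p_i * X_i) + (p_i * Y_i)
  let recipient = (mod_pow(big_x, p_i, P) + mod_pow(big_y, p_i, P)) % P;
  assert_eq!(sender, recipient);
}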
use core::ops::Deref;
use std::{
io::{self, Read, Write},
collections::{HashSet, HashMap},
};
use rand_core::{RngCore, CryptoRng};
use zeroize::{Zeroize, Zeroizing};
use blake2::{Digest, Blake2s256};
use ciphersuite::{
group::{
ff::{Field, PrimeField},
Group, GroupEncoding,
},
Ciphersuite,
};
use multiexp::multiexp_vartime;
use generalized_bulletproofs::arithmetic_circuit_proof::*;
use ec_divisors::DivisorCurve;
use crate::{Participant, ThresholdParams, Interpolation, ThresholdCore, ThresholdKeys};
pub(crate) mod proof;
use proof::*;
pub use proof::{EvrfCurve, EvrfGenerators};
/// Participation in the DKG.
///
/// `Participation` is meant to be broadcast to all other participants over an authenticated,
/// reliable broadcast channel.
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct Participation<C: Ciphersuite> {
proof: Vec<u8>,
encrypted_secret_shares: HashMap<Participant, C::F>,
}
impl<C: Ciphersuite> Participation<C> {
pub fn read<R: Read>(reader: &mut R, n: u16) -> io::Result<Self> {
// TODO: Replace `len` with some calculation deterministic to the params
let mut len = [0; 4];
reader.read_exact(&mut len)?;
let len = usize::try_from(u32::from_le_bytes(len)).expect("<32-bit platform?");
// Don't allocate a buffer for the claimed length
// Read chunks until we reach the claimed length
// This means if we were told to read GB, we must actually be sent GB before allocating as such
const CHUNK_SIZE: usize = 1024;
let mut proof = Vec::with_capacity(len.min(CHUNK_SIZE));
while proof.len() < len {
let next_chunk = (len - proof.len()).min(CHUNK_SIZE);
let old_proof_len = proof.len();
proof.resize(old_proof_len + next_chunk, 0);
reader.read_exact(&mut proof[old_proof_len ..])?;
}
let mut encrypted_secret_shares = HashMap::with_capacity(usize::from(n));
for i in (1 ..= n).map(Participant) {
encrypted_secret_shares.insert(i, C::read_F(reader)?);
}
Ok(Self { proof, encrypted_secret_shares })
}
pub fn write<W: Write>(&self, writer: &mut W) -> io::Result<()> {
writer.write_all(&u32::try_from(self.proof.len()).unwrap().to_le_bytes())?;
writer.write_all(&self.proof)?;
for i in (1 ..= u16::try_from(self.encrypted_secret_shares.len())
.expect("writing a Participation which has a n > u16::MAX"))
.map(Participant)
{
writer.write_all(self.encrypted_secret_shares[&i].to_repr().as_ref())?;
}
Ok(())
}
}
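The chunked read in `Participation::read` is a denial-of-service defense: allocation tracks bytes actually received, never the attacker-claimed length. A standalone sketch of the same pattern over any `io::Read` (function name and constants here are illustrative):

use std::io::{self, Read};

// Read `len` bytes without trusting `len` for the initial allocation
fn read_bounded(reader: &mut impl Read, len: usize) -> io::Result<Vec<u8>> {
  const CHUNK_SIZE: usize = 1024;
  let mut buf = Vec::with_capacity(len.min(CHUNK_SIZE));
  while buf.len() < len {
    let next_chunk = (len - buf.len()).min(CHUNK_SIZE);
    let old_len = buf.len();
    buf.resize(old_len + next_chunk, 0);
    // Errors out (without the full allocation) if the sender never supplies the bytes
    reader.read_exact(&mut buf[old_len ..])?;
  }
  Ok(buf)
}

fn main() {
  // A peer claiming 1 GB yet sending 5 bytes fails after a single 1 KiB chunk
  let mut short = io::Cursor::new(vec![0u8; 5]);
  assert!(read_bounded(&mut short, 1 << 30).is_err());
}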
fn polynomial<F: PrimeField + Zeroize>(
coefficients: &[Zeroizing<F>],
l: Participant,
) -> Zeroizing<F> {
let l = F::from(u64::from(u16::from(l)));
// This should never be reached since Participant is explicitly non-zero
assert!(l != F::ZERO, "zero participant passed to polynomial");
let mut share = Zeroizing::new(F::ZERO);
for (idx, coefficient) in coefficients.iter().rev().enumerate() {
*share += coefficient.deref();
if idx != (coefficients.len() - 1) {
*share *= l;
}
}
share
}
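`polynomial` evaluates the share via Horner's method over the reversed coefficients, so `coefficients[0]` is the constant term (the secret). A quick numeric check of that ordering, with plain integers standing in for field elements:

fn horner(coefficients: &[u64], l: u64) -> u64 {
  let mut share = 0;
  for (idx, coefficient) in coefficients.iter().rev().enumerate() {
    share += coefficient;
    if idx != (coefficients.len() - 1) {
      share *= l;
    }
  }
  share
}

fn main() {
  // f(x) = 5 + 3x + 2x^2, evaluated at l = 4: 5 + 12 + 32 = 49
  assert_eq!(horner(&[5, 3, 2], 4), 49);
}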
#[allow(clippy::type_complexity)]
fn share_verification_statements<C: Ciphersuite>(
rng: &mut (impl RngCore + CryptoRng),
commitments: &[C::G],
n: u16,
encryption_commitments: &[C::G],
encrypted_secret_shares: &HashMap<Participant, C::F>,
) -> (C::F, Vec<(C::F, C::G)>) {
debug_assert_eq!(usize::from(n), encryption_commitments.len());
debug_assert_eq!(usize::from(n), encrypted_secret_shares.len());
let mut g_scalar = C::F::ZERO;
let mut pairs = Vec::with_capacity(commitments.len() + encryption_commitments.len());
for commitment in commitments {
pairs.push((C::F::ZERO, *commitment));
}
let mut weight;
for (i, enc_share) in encrypted_secret_shares {
let enc_commitment = encryption_commitments[usize::from(u16::from(*i)) - 1];
weight = C::F::random(&mut *rng);
// s_i F
g_scalar += weight * enc_share;
// - Z_i
let weight = -weight;
pairs.push((weight, enc_commitment));
// - V_i
{
let i = C::F::from(u64::from(u16::from(*i)));
// The first `commitments.len()` pairs are for the commitments
(0 .. commitments.len()).fold(weight, |exp, j| {
pairs[j].0 += exp;
exp * i
});
}
}
(g_scalar, pairs)
}
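This statement batches every share check `s_i * F = Z_i + V_i` behind random weights: the final multiexp equals the identity iff, with overwhelming probability, each weighted relation individually holds. A scalar-only sketch of why random weighting catches a bad relation, using field arithmetic modulo a small prime (the fixed weights below stand in for RNG output):

const Q: u64 = 65_521; // a small prime standing in for the scalar field

// Each relation claims lhs == rhs (mod Q); batch them with random weights
fn batch_holds(relations: &[(u64, u64)], weights: &[u64]) -> bool {
  let mut acc: u64 = 0;
  for ((lhs, rhs), w) in relations.iter().zip(weights) {
    // Accumulate w * (lhs - rhs); a correct relation contributes zero
    acc = (acc + w * ((lhs + Q - rhs) % Q)) % Q;
  }
  acc == 0
}

fn main() {
  let good = [(10, 10), (7, 7)];
  let bad = [(10, 10), (7, 8)];
  let weights = [12_345, 54_321]; // stand-ins for random field elements
  assert!(batch_holds(&good, &weights));
  // A single incorrect relation unbalances the sum except with probability ~1/Q
  assert!(!batch_holds(&bad, &weights));
}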
/// Errors from the eVRF DKG.
#[derive(Clone, PartialEq, Eq, Debug, thiserror::Error)]
pub enum EvrfError {
#[error("n, the amount of participants, exceeded a u16")]
TooManyParticipants,
#[error("the threshold t wasn't in range 1 <= t <= n")]
InvalidThreshold,
#[error("a public key was the identity point")]
PublicKeyWasIdentity,
#[error("participating in a DKG we aren't a participant in")]
NotAParticipant,
#[error("a participant with an unrecognized ID participated")]
NonExistentParticipant,
#[error("the passed in generators did not have enough generators for this DKG")]
NotEnoughGenerators,
}
/// The result of calling EvrfDkg::verify.
pub enum VerifyResult<C: EvrfCurve> {
Valid(EvrfDkg<C>),
Invalid(Vec<Participant>),
NotEnoughParticipants,
}
/// Struct to perform/verify the DKG with.
#[derive(Debug)]
pub struct EvrfDkg<C: EvrfCurve> {
t: u16,
n: u16,
evrf_public_keys: Vec<<C::EmbeddedCurve as Ciphersuite>::G>,
group_key: C::G,
verification_shares: HashMap<Participant, C::G>,
#[allow(clippy::type_complexity)]
encrypted_secret_shares:
HashMap<Participant, HashMap<Participant, ([<C::EmbeddedCurve as Ciphersuite>::G; 2], C::F)>>,
}
impl<C: EvrfCurve> EvrfDkg<C> {
// Form the initial transcript for the proofs.
fn initial_transcript(
invocation: [u8; 32],
evrf_public_keys: &[<C::EmbeddedCurve as Ciphersuite>::G],
t: u16,
) -> [u8; 32] {
let mut transcript = Blake2s256::new();
transcript.update(invocation);
for key in evrf_public_keys {
transcript.update(key.to_bytes().as_ref());
}
transcript.update(t.to_le_bytes());
transcript.finalize().into()
}
/// Participate in performing the DKG for the specified parameters.
///
/// The context MUST be unique across invocations. Reuse of context will lead to sharing
/// prior-shared secrets.
///
/// Public keys are not allowed to be the identity point. This will error if any are.
pub fn participate(
rng: &mut (impl RngCore + CryptoRng),
generators: &EvrfGenerators<C>,
context: [u8; 32],
t: u16,
evrf_public_keys: &[<C::EmbeddedCurve as Ciphersuite>::G],
evrf_private_key: &Zeroizing<<C::EmbeddedCurve as Ciphersuite>::F>,
) -> Result<Participation<C>, EvrfError> {
let Ok(n) = u16::try_from(evrf_public_keys.len()) else { Err(EvrfError::TooManyParticipants)? };
if (t == 0) || (t > n) {
Err(EvrfError::InvalidThreshold)?;
}
if evrf_public_keys.iter().any(|key| bool::from(key.is_identity())) {
Err(EvrfError::PublicKeyWasIdentity)?;
};
let evrf_public_key = <C::EmbeddedCurve as Ciphersuite>::generator() * evrf_private_key.deref();
if !evrf_public_keys.iter().any(|key| *key == evrf_public_key) {
Err(EvrfError::NotAParticipant)?;
};
let transcript = Self::initial_transcript(context, evrf_public_keys, t);
// Further bind to the participant index so each index gets unique generators
// This allows reusing eVRF public keys as the prover
let mut per_proof_transcript = Blake2s256::new();
per_proof_transcript.update(transcript);
per_proof_transcript.update(evrf_public_key.to_bytes());
// The above transcript is expected to be binding to all arguments here
// The generators are constant to this ciphersuite's generator, and the parameters are
// transcripted
let EvrfProveResult { coefficients, encryption_masks, proof } = match Evrf::prove(
rng,
&generators.0,
per_proof_transcript.finalize().into(),
usize::from(t),
evrf_public_keys,
evrf_private_key,
) {
Ok(res) => res,
Err(AcError::NotEnoughGenerators) => Err(EvrfError::NotEnoughGenerators)?,
Err(
AcError::DifferingLrLengths |
AcError::InconsistentAmountOfConstraints |
AcError::ConstrainedNonExistentTerm |
AcError::ConstrainedNonExistentCommitment |
AcError::InconsistentWitness |
AcError::Ip(_) |
AcError::IncompleteProof,
) => {
panic!("failed to prove for the eVRF proof")
}
};
let mut encrypted_secret_shares = HashMap::with_capacity(usize::from(n));
for (l, encryption_mask) in (1 ..= n).map(Participant).zip(encryption_masks) {
let share = polynomial::<C::F>(&coefficients, l);
encrypted_secret_shares.insert(l, *share + *encryption_mask);
}
Ok(Participation { proof, encrypted_secret_shares })
}
/// Check if a batch of `Participation`s are valid.
///
/// If any `Participation` is invalid, the list of all invalid participants will be returned.
/// If all `Participation`s are valid and there's at least `t`, an instance of this struct
/// (usable to obtain a threshold share of generated key) is returned. If all are valid and
/// there's not at least `t`, `VerifyResult::NotEnoughParticipants` is returned.
///
/// This DKG is unbiased if all `n` people participate. This DKG is biased if only a threshold
/// participate.
pub fn verify(
rng: &mut (impl RngCore + CryptoRng),
generators: &EvrfGenerators<C>,
context: [u8; 32],
t: u16,
evrf_public_keys: &[<C::EmbeddedCurve as Ciphersuite>::G],
participations: &HashMap<Participant, Participation<C>>,
) -> Result<VerifyResult<C>, EvrfError> {
let Ok(n) = u16::try_from(evrf_public_keys.len()) else { Err(EvrfError::TooManyParticipants)? };
if (t == 0) || (t > n) {
Err(EvrfError::InvalidThreshold)?;
}
if evrf_public_keys.iter().any(|key| bool::from(key.is_identity())) {
Err(EvrfError::PublicKeyWasIdentity)?;
};
for i in participations.keys() {
if u16::from(*i) > n {
Err(EvrfError::NonExistentParticipant)?;
}
}
let mut valid = HashMap::with_capacity(participations.len());
let mut faulty = HashSet::new();
let transcript = Self::initial_transcript(context, evrf_public_keys, t);
let mut evrf_verifier = generators.0.batch_verifier();
for (i, participation) in participations {
let evrf_public_key = evrf_public_keys[usize::from(u16::from(*i)) - 1];
let mut per_proof_transcript = Blake2s256::new();
per_proof_transcript.update(transcript);
per_proof_transcript.update(evrf_public_key.to_bytes());
// Clone the verifier so if this proof is faulty, it doesn't corrupt the verifier
let mut verifier_clone = evrf_verifier.clone();
let Ok(data) = Evrf::<C>::verify(
rng,
&generators.0,
&mut verifier_clone,
per_proof_transcript.finalize().into(),
usize::from(t),
evrf_public_keys,
evrf_public_key,
&participation.proof,
) else {
faulty.insert(*i);
continue;
};
evrf_verifier = verifier_clone;
valid.insert(*i, (participation.encrypted_secret_shares.clone(), data));
}
debug_assert_eq!(valid.len() + faulty.len(), participations.len());
// Perform the batch verification of the eVRFs
if !generators.0.verify(evrf_verifier) {
// If the batch failed, verify them each individually
for (i, participation) in participations {
if faulty.contains(i) {
continue;
}
let mut evrf_verifier = generators.0.batch_verifier();
// Rebuild the per-proof transcript (binding the prover's key) used for the batch path
let mut per_proof_transcript = Blake2s256::new();
per_proof_transcript.update(transcript);
per_proof_transcript.update(evrf_public_keys[usize::from(u16::from(*i)) - 1].to_bytes());
Evrf::<C>::verify(
rng,
&generators.0,
&mut evrf_verifier,
per_proof_transcript.finalize().into(),
usize::from(t),
evrf_public_keys,
evrf_public_keys[usize::from(u16::from(*i)) - 1],
&participation.proof,
)
.expect("evrf failed basic checks yet prover wasn't prior marked faulty");
if !generators.0.verify(evrf_verifier) {
valid.remove(i);
faulty.insert(*i);
}
}
}
debug_assert_eq!(valid.len() + faulty.len(), participations.len());
// Perform the batch verification of the shares
let mut sum_encrypted_secret_shares = HashMap::with_capacity(usize::from(n));
let mut sum_masks = HashMap::with_capacity(usize::from(n));
let mut all_encrypted_secret_shares = HashMap::with_capacity(usize::from(t));
{
let mut share_verification_statements_actual = HashMap::with_capacity(valid.len());
if !{
let mut g_scalar = C::F::ZERO;
let mut pairs = Vec::with_capacity(valid.len() * (usize::from(t) + evrf_public_keys.len()));
for (i, (encrypted_secret_shares, data)) in &valid {
let (this_g_scalar, mut these_pairs) = share_verification_statements::<C>(
&mut *rng,
&data.coefficients,
evrf_public_keys
.len()
.try_into()
.expect("n prior checked to be <= u16::MAX couldn't be converted to a u16"),
&data.encryption_commitments,
encrypted_secret_shares,
);
// Queue this into our batch
g_scalar += this_g_scalar;
pairs.extend(&these_pairs);
// Also push this g_scalar onto these_pairs so these_pairs can be verified individually
// upon error
these_pairs.push((this_g_scalar, generators.0.g()));
share_verification_statements_actual.insert(*i, these_pairs);
// Also format this data as we'd need it upon success
let mut formatted_encrypted_secret_shares = HashMap::with_capacity(usize::from(n));
for (j, enc_share) in encrypted_secret_shares {
/*
We calculate verification shares as the sum of the encrypted scalars, minus their
masks. This only does one scalar multiplication, and `1 + t` point additions (with
one negation), and is accordingly much cheaper than interpolating the commitments.
This is only possible because we already interpolated the commitments to verify the
encrypted secret share.
*/
let sum_encrypted_secret_share =
sum_encrypted_secret_shares.get(j).copied().unwrap_or(C::F::ZERO);
let sum_mask = sum_masks.get(j).copied().unwrap_or(C::G::identity());
sum_encrypted_secret_shares.insert(*j, sum_encrypted_secret_share + enc_share);
let j_index = usize::from(u16::from(*j)) - 1;
sum_masks.insert(*j, sum_mask + data.encryption_commitments[j_index]);
formatted_encrypted_secret_shares.insert(*j, (data.ecdh_keys[j_index], *enc_share));
}
all_encrypted_secret_shares.insert(*i, formatted_encrypted_secret_shares);
}
pairs.push((g_scalar, generators.0.g()));
bool::from(multiexp_vartime(&pairs).is_identity())
} {
// If the batch failed, verify them each individually
for (i, pairs) in share_verification_statements_actual {
if !bool::from(multiexp_vartime(&pairs).is_identity()) {
valid.remove(&i);
faulty.insert(i);
}
}
}
}
debug_assert_eq!(valid.len() + faulty.len(), participations.len());
let mut faulty = faulty.into_iter().collect::<Vec<_>>();
if !faulty.is_empty() {
faulty.sort_unstable();
return Ok(VerifyResult::Invalid(faulty));
}
// We check participants representing at least t key shares have contributed entropy
// Since those participants collectively meet the threshold, meaning if they're malicious they
// can reconstruct the key regardless, this is safe with respect to the threshold
{
let mut participating_weight = 0;
let mut evrf_public_keys_mut = evrf_public_keys.to_vec();
for i in valid.keys() {
let evrf_public_key = evrf_public_keys[usize::from(u16::from(*i)) - 1];
// Remove this key from the Vec to prevent double-counting
/*
Double-counting would be a risk if multiple participants shared an eVRF public key and
participated. This code does still allow such participants (in order to let participants
be weighted), and any one of them participating will count as all participating. This is
fine as any one such participant will be able to decrypt the shares for themselves and
all other participants, so this is still a key generated by an amount of participants who
could simply reconstruct the key.
*/
let start_len = evrf_public_keys_mut.len();
evrf_public_keys_mut.retain(|key| *key != evrf_public_key);
let end_len = evrf_public_keys_mut.len();
let count = start_len - end_len;
participating_weight += count;
}
if participating_weight < usize::from(t) {
return Ok(VerifyResult::NotEnoughParticipants);
}
}
// If we now have >= t participations, calculate the group key and verification shares
// The group key is the sum of the zero coefficients
let group_key = valid.values().map(|(_, evrf_data)| evrf_data.coefficients[0]).sum::<C::G>();
// Calculate each user's verification share
let mut verification_shares = HashMap::with_capacity(usize::from(n));
for i in (1 ..= n).map(Participant) {
verification_shares
.insert(i, (C::generator() * sum_encrypted_secret_shares[&i]) - sum_masks[&i]);
}
Ok(VerifyResult::Valid(EvrfDkg {
t,
n,
evrf_public_keys: evrf_public_keys.to_vec(),
group_key,
verification_shares,
encrypted_secret_shares: all_encrypted_secret_shares,
}))
}
pub fn keys(
&self,
evrf_private_key: &Zeroizing<<C::EmbeddedCurve as Ciphersuite>::F>,
) -> Vec<ThresholdKeys<C>> {
let evrf_public_key = <C::EmbeddedCurve as Ciphersuite>::generator() * evrf_private_key.deref();
let mut is = Vec::with_capacity(1);
for (i, evrf_key) in self.evrf_public_keys.iter().enumerate() {
if *evrf_key == evrf_public_key {
let i = u16::try_from(i).expect("n <= u16::MAX yet i > u16::MAX?");
let i = Participant(1 + i);
is.push(i);
}
}
let mut res = Vec::with_capacity(is.len());
for i in is {
let mut secret_share = Zeroizing::new(C::F::ZERO);
for shares in self.encrypted_secret_shares.values() {
let (ecdh_keys, enc_share) = shares[&i];
let mut ecdh = Zeroizing::new(C::F::ZERO);
for point in ecdh_keys {
let (mut x, mut y) =
<C::EmbeddedCurve as Ciphersuite>::G::to_xy(point * evrf_private_key.deref()).unwrap();
*ecdh += x;
x.zeroize();
y.zeroize();
}
*secret_share += enc_share - ecdh.deref();
}
debug_assert_eq!(self.verification_shares[&i], C::generator() * secret_share.deref());
res.push(ThresholdKeys::from(ThresholdCore {
params: ThresholdParams::new(self.t, self.n, i).unwrap(),
interpolation: Interpolation::Lagrange,
secret_share,
group_key: self.group_key,
verification_shares: self.verification_shares.clone(),
}));
}
res
}
}

View File

@@ -0,0 +1,696 @@
use core::{marker::PhantomData, ops::Deref, fmt};
use zeroize::{Zeroize, Zeroizing};
use rand_core::{RngCore, CryptoRng, SeedableRng};
use rand_chacha::ChaCha20Rng;
use generic_array::{typenum::Unsigned, ArrayLength, GenericArray};
use blake2::{Digest, Blake2s256};
use ciphersuite::{
group::{ff::Field, Group, GroupEncoding},
Ciphersuite,
};
use generalized_bulletproofs::{
*,
transcript::{Transcript as ProverTranscript, VerifierTranscript},
arithmetic_circuit_proof::*,
};
use generalized_bulletproofs_circuit_abstraction::*;
use ec_divisors::{DivisorCurve, ScalarDecomposition};
use generalized_bulletproofs_ec_gadgets::*;
/// A pair of curves to perform the eVRF with.
pub trait EvrfCurve: Ciphersuite {
type EmbeddedCurve: Ciphersuite<G: DivisorCurve<FieldElement = <Self as Ciphersuite>::F>>;
type EmbeddedCurveParameters: DiscreteLogParameters;
}
#[cfg(feature = "evrf-secp256k1")]
impl EvrfCurve for ciphersuite::Secp256k1 {
type EmbeddedCurve = secq256k1::Secq256k1;
type EmbeddedCurveParameters = secq256k1::Secq256k1;
}
#[cfg(feature = "evrf-ed25519")]
impl EvrfCurve for ciphersuite::Ed25519 {
type EmbeddedCurve = embedwards25519::Embedwards25519;
type EmbeddedCurveParameters = embedwards25519::Embedwards25519;
}
#[cfg(feature = "evrf-ristretto")]
impl EvrfCurve for ciphersuite::Ristretto {
type EmbeddedCurve = embedwards25519::Embedwards25519;
type EmbeddedCurveParameters = embedwards25519::Embedwards25519;
}
fn sample_point<C: Ciphersuite>(rng: &mut (impl RngCore + CryptoRng)) -> C::G {
let mut repr = <C::G as GroupEncoding>::Repr::default();
loop {
rng.fill_bytes(repr.as_mut());
if let Ok(point) = C::read_G(&mut repr.as_ref()) {
if bool::from(!point.is_identity()) {
return point;
}
}
}
}
/// Generators for eVRF proof.
#[derive(Clone, Debug)]
pub struct EvrfGenerators<C: EvrfCurve>(pub(crate) Generators<C>);
impl<C: EvrfCurve> EvrfGenerators<C> {
/// Create a new set of generators.
pub fn new(max_threshold: u16, max_participants: u16) -> EvrfGenerators<C> {
let g = C::generator();
let mut rng = ChaCha20Rng::from_seed(Blake2s256::digest(g.to_bytes()).into());
let h = sample_point::<C>(&mut rng);
let (_, generators) =
Evrf::<C>::muls_and_generators_to_use(max_threshold.into(), max_participants.into());
let mut g_bold = vec![];
let mut h_bold = vec![];
for _ in 0 .. generators {
g_bold.push(sample_point::<C>(&mut rng));
h_bold.push(sample_point::<C>(&mut rng));
}
Self(Generators::new(g, h, g_bold, h_bold).unwrap())
}
}
/// The result of proving for an eVRF.
pub(crate) struct EvrfProveResult<C: Ciphersuite> {
/// The coefficients for use in the DKG.
pub(crate) coefficients: Vec<Zeroizing<C::F>>,
/// The masks to encrypt secret shares with.
pub(crate) encryption_masks: Vec<Zeroizing<C::F>>,
/// The proof itself.
pub(crate) proof: Vec<u8>,
}
/// The result of verifying an eVRF.
pub(crate) struct EvrfVerifyResult<C: EvrfCurve> {
/// The commitments to the coefficients for use in the DKG.
pub(crate) coefficients: Vec<C::G>,
/// The ephemeral public keys to perform ECDHs with
pub(crate) ecdh_keys: Vec<[<C::EmbeddedCurve as Ciphersuite>::G; 2]>,
/// The commitments to the masks used to encrypt secret shares with.
pub(crate) encryption_commitments: Vec<C::G>,
}
impl<C: EvrfCurve> fmt::Debug for EvrfVerifyResult<C> {
fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result {
fmt.debug_struct("EvrfVerifyResult").finish_non_exhaustive()
}
}
/// A struct to prove/verify eVRFs with.
pub(crate) struct Evrf<C: EvrfCurve>(PhantomData<C>);
impl<C: EvrfCurve> Evrf<C> {
// Sample uniform points (via rejection-sampling) on the embedded elliptic curve
fn transcript_to_points(
seed: [u8; 32],
coefficients: usize,
) -> Vec<<C::EmbeddedCurve as Ciphersuite>::G> {
// We need to do two Diffie-Hellmans per coefficient in order to achieve an unbiased result
let quantity = 2 * coefficients;
let mut rng = ChaCha20Rng::from_seed(seed);
let mut res = Vec::with_capacity(quantity);
for _ in 0 .. quantity {
res.push(sample_point::<C::EmbeddedCurve>(&mut rng));
}
res
}
/// Read a Variable from a theoretical vector commitment tape
fn read_one_from_tape(generators_to_use: usize, start: &mut usize) -> Variable {
// Each commitment has twice as many variables as generators in use
let commitment = *start / (2 * generators_to_use);
// The index will be less than the amount of generators in use, as half are left and half are
// right
let index = *start % generators_to_use;
let res = if (*start / generators_to_use) % 2 == 0 {
Variable::CG { commitment, index }
} else {
Variable::CH { commitment, index }
};
*start += 1;
res
}
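The mapping from a flat tape position to a `Variable` can be sanity-checked with small numbers: with `generators_to_use = 4`, positions 0..4 land on commitment 0's g-side, 4..8 on its h-side, and position 8 starts commitment 1. A standalone sketch of the same index arithmetic, with plain tuples standing in for `Variable`:

// (commitment, is_h_side, index) for a flat tape position
fn tape_slot(generators_to_use: usize, pos: usize) -> (usize, bool, usize) {
  let commitment = pos / (2 * generators_to_use);
  let index = pos % generators_to_use;
  let h_side = (pos / generators_to_use) % 2 == 1;
  (commitment, h_side, index)
}

fn main() {
  assert_eq!(tape_slot(4, 0), (0, false, 0)); // CG { commitment: 0, index: 0 }
  assert_eq!(tape_slot(4, 5), (0, true, 1)); // CH { commitment: 0, index: 1 }
  assert_eq!(tape_slot(4, 8), (1, false, 0)); // CG { commitment: 1, index: 0 }
}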
/// Read a set of variables from a theoretical vector commitment tape
fn read_from_tape<N: ArrayLength>(
generators_to_use: usize,
start: &mut usize,
) -> GenericArray<Variable, N> {
let mut buf = Vec::with_capacity(N::USIZE);
for _ in 0 .. N::USIZE {
buf.push(Self::read_one_from_tape(generators_to_use, start));
}
GenericArray::from_slice(&buf).clone()
}
/// Read `PointWithDlog`s, which share a discrete logarithm, from the theoretical vector
/// commitment tape.
fn point_with_dlogs(
start: &mut usize,
quantity: usize,
generators_to_use: usize,
) -> Vec<PointWithDlog<C::EmbeddedCurveParameters>> {
// We define a serialized tape of the discrete logarithm, then for each divisor/point, we push:
// zero, x**i, y x**i, y, x_coord, y_coord
// We then chunk that into vector commitments
// Here, we take the assumed layout and generate the expected `Variable`s for this layout
let dlog = Self::read_from_tape(generators_to_use, start);
let mut res = Vec::with_capacity(quantity);
let mut read_point_with_dlog = || {
let zero = Self::read_one_from_tape(generators_to_use, start);
let x_from_power_of_2 = Self::read_from_tape(generators_to_use, start);
let yx = Self::read_from_tape(generators_to_use, start);
let y = Self::read_one_from_tape(generators_to_use, start);
let divisor = Divisor { zero, x_from_power_of_2, yx, y };
let point = (
Self::read_one_from_tape(generators_to_use, start),
Self::read_one_from_tape(generators_to_use, start),
);
res.push(PointWithDlog { dlog: dlog.clone(), divisor, point });
};
for _ in 0 .. quantity {
read_point_with_dlog();
}
res
}
fn muls_and_generators_to_use(coefficients: usize, ecdhs: usize) -> (usize, usize) {
const MULS_PER_DH: usize = 7;
// 1 DH to prove the discrete logarithm corresponds to the eVRF public key
// 2 DHs per generated coefficient
// 2 DHs per generated ECDH
let expected_muls = MULS_PER_DH * (1 + (2 * coefficients) + (2 * 2 * ecdhs));
let generators_to_use = {
let mut padded_pow_of_2 = 1;
while padded_pow_of_2 < expected_muls {
padded_pow_of_2 <<= 1;
}
// This may be as small as 16, which would create an excessive amount of vector commitments
// We set a floor of 1024 rows for bandwidth reasons
padded_pow_of_2.max(1024)
};
(expected_muls, generators_to_use)
}
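For intuition on these sizes: with `coefficients = 5` and `ecdhs = 10`, the expected multiplications are `7 * (1 + 10 + 40) = 357`, which pads to 512 and is then floored to 1024 rows. A sketch of just the padding rule, restating the loop above:

fn generators_to_use(expected_muls: usize) -> usize {
  let mut padded_pow_of_2 = 1;
  while padded_pow_of_2 < expected_muls {
    padded_pow_of_2 <<= 1;
  }
  // Floor of 1024 rows for bandwidth reasons, as in the function above
  padded_pow_of_2.max(1024)
}

fn main() {
  let expected_muls = 7 * (1 + (2 * 5) + (2 * 2 * 10));
  assert_eq!(expected_muls, 357);
  assert_eq!(generators_to_use(expected_muls), 1024);
  // Larger circuits pad to the next power of two
  assert_eq!(generators_to_use(3000), 4096);
}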
fn circuit(
curve_spec: &CurveSpec<C::F>,
evrf_public_key: (C::F, C::F),
coefficients: usize,
ecdh_commitments: &[[(C::F, C::F); 2]],
generator_tables: &[GeneratorTable<C::F, C::EmbeddedCurveParameters>],
circuit: &mut Circuit<C>,
transcript: &mut impl Transcript,
) {
let (expected_muls, generators_to_use) =
Self::muls_and_generators_to_use(coefficients, ecdh_commitments.len());
let (challenge, challenged_generators) =
circuit.discrete_log_challenge(transcript, curve_spec, generator_tables);
debug_assert_eq!(challenged_generators.len(), 1 + (2 * coefficients) + ecdh_commitments.len());
// The generators tables/challenged generators are expected to have the following layouts
// G, coefficients * [A, B], ecdhs * [P]
#[allow(non_snake_case)]
let challenged_G = &challenged_generators[0];
// Execute the circuit for the coefficients
let mut tape_pos = 0;
{
let mut point_with_dlogs =
Self::point_with_dlogs(&mut tape_pos, 1 + (2 * coefficients), generators_to_use)
.into_iter();
// Verify the discrete logarithm is in fact the discrete logarithm of the eVRF public key
let point = circuit.discrete_log(
curve_spec,
point_with_dlogs.next().unwrap(),
&challenge,
challenged_G,
);
circuit.equality(LinComb::from(point.x()), &LinComb::empty().constant(evrf_public_key.0));
circuit.equality(LinComb::from(point.y()), &LinComb::empty().constant(evrf_public_key.1));
// Verify the DLog claims against the sampled points
for (i, pair) in challenged_generators[1 ..].chunks(2).take(coefficients).enumerate() {
let mut lincomb = LinComb::empty();
debug_assert_eq!(pair.len(), 2);
for challenged_generator in pair {
let point = circuit.discrete_log(
curve_spec,
point_with_dlogs.next().unwrap(),
&challenge,
challenged_generator,
);
// For each point in this pair, add its x coordinate to a lincomb
lincomb = lincomb.term(C::F::ONE, point.x());
}
// Constrain the sum of the two x coordinates to be equal to the value in the Pedersen
// commitment
circuit.equality(lincomb, &LinComb::from(Variable::V(i)));
}
debug_assert!(point_with_dlogs.next().is_none());
}
// Now execute the circuit for the ECDHs
let mut challenged_generators = challenged_generators.iter().skip(1 + (2 * coefficients));
for (i, ecdh) in ecdh_commitments.iter().enumerate() {
let challenged_generator = challenged_generators.next().unwrap();
let mut lincomb = LinComb::empty();
for ecdh in ecdh {
let mut point_with_dlogs =
Self::point_with_dlogs(&mut tape_pos, 2, generators_to_use).into_iter();
// One proof of the ECDH secret * G for the commitment published
let point = circuit.discrete_log(
curve_spec,
point_with_dlogs.next().unwrap(),
&challenge,
challenged_G,
);
circuit.equality(LinComb::from(point.x()), &LinComb::empty().constant(ecdh.0));
circuit.equality(LinComb::from(point.y()), &LinComb::empty().constant(ecdh.1));
// One proof of the ECDH secret * P for the ECDH
let point = circuit.discrete_log(
curve_spec,
point_with_dlogs.next().unwrap(),
&challenge,
challenged_generator,
);
// For each point in this pair, add its x coordinate to a lincomb
lincomb = lincomb.term(C::F::ONE, point.x());
}
// Constrain the sum of the two x coordinates to be equal to the value in the Pedersen
// commitment
circuit.equality(lincomb, &LinComb::from(Variable::V(coefficients + i)));
}
debug_assert_eq!(expected_muls, circuit.muls());
debug_assert!(challenged_generators.next().is_none());
}
/// Prove a point on an elliptic curve had its discrete logarithm generated via an eVRF.
pub(crate) fn prove(
rng: &mut (impl RngCore + CryptoRng),
generators: &Generators<C>,
transcript: [u8; 32],
coefficients: usize,
ecdh_public_keys: &[<<C as EvrfCurve>::EmbeddedCurve as Ciphersuite>::G],
evrf_private_key: &Zeroizing<<<C as EvrfCurve>::EmbeddedCurve as Ciphersuite>::F>,
) -> Result<EvrfProveResult<C>, AcError> {
let curve_spec = CurveSpec {
a: <<C as EvrfCurve>::EmbeddedCurve as Ciphersuite>::G::a(),
b: <<C as EvrfCurve>::EmbeddedCurve as Ciphersuite>::G::b(),
};
// A tape of the discrete logarithm, then [zero, x**i, y x**i, y, x_coord, y_coord]
let mut vector_commitment_tape = vec![];
let mut generator_tables = Vec::with_capacity(1 + (2 * coefficients) + ecdh_public_keys.len());
// A closure to calculate a divisor and push it onto the tape
// This is defined outside the loops below so its buffers' allocations can be reused
let mut divisor =
|vector_commitment_tape: &mut Vec<_>,
dlog: &ScalarDecomposition<<<C as EvrfCurve>::EmbeddedCurve as Ciphersuite>::F>,
push_generator: bool,
generator: <<C as EvrfCurve>::EmbeddedCurve as Ciphersuite>::G,
dh: <<C as EvrfCurve>::EmbeddedCurve as Ciphersuite>::G| {
if push_generator {
let (x, y) = <C::EmbeddedCurve as Ciphersuite>::G::to_xy(generator).unwrap();
generator_tables.push(GeneratorTable::new(&curve_spec, x, y));
}
let mut divisor = dlog.scalar_mul_divisor(generator).normalize_x_coefficient();
vector_commitment_tape.push(divisor.zero_coefficient);
for coefficient in divisor.x_coefficients.iter().skip(1) {
vector_commitment_tape.push(*coefficient);
}
for _ in divisor.x_coefficients.len() ..
<C::EmbeddedCurveParameters as DiscreteLogParameters>::XCoefficientsMinusOne::USIZE
{
vector_commitment_tape.push(<C as Ciphersuite>::F::ZERO);
}
for coefficient in divisor.yx_coefficients.first().unwrap_or(&vec![]) {
vector_commitment_tape.push(*coefficient);
}
for _ in divisor.yx_coefficients.first().unwrap_or(&vec![]).len() ..
<C::EmbeddedCurveParameters as DiscreteLogParameters>::YxCoefficients::USIZE
{
vector_commitment_tape.push(<C as Ciphersuite>::F::ZERO);
}
vector_commitment_tape
.push(divisor.y_coefficients.first().copied().unwrap_or(<C as Ciphersuite>::F::ZERO));
divisor.zeroize();
drop(divisor);
let (x, y) = <C::EmbeddedCurve as Ciphersuite>::G::to_xy(dh).unwrap();
vector_commitment_tape.push(x);
vector_commitment_tape.push(y);
(x, y)
};
// Start with the coefficients
let evrf_public_key;
let mut actual_coefficients = Vec::with_capacity(coefficients);
{
let dlog =
ScalarDecomposition::<<C::EmbeddedCurve as Ciphersuite>::F>::new(**evrf_private_key);
let points = Self::transcript_to_points(transcript, coefficients);
// Start by pushing the discrete logarithm onto the tape
for coefficient in dlog.decomposition() {
vector_commitment_tape.push(<_>::from(*coefficient));
}
// Push a divisor for proving that we're using the correct scalar
evrf_public_key = divisor(
&mut vector_commitment_tape,
&dlog,
true,
<<C as EvrfCurve>::EmbeddedCurve as Ciphersuite>::generator(),
<<C as EvrfCurve>::EmbeddedCurve as Ciphersuite>::generator() * evrf_private_key.deref(),
);
// Push a divisor for each point we use in the eVRF
for pair in points.chunks(2) {
let mut res = Zeroizing::new(C::F::ZERO);
for point in pair {
let (dh_x, _) = divisor(
&mut vector_commitment_tape,
&dlog,
true,
*point,
*point * evrf_private_key.deref(),
);
*res += dh_x;
}
actual_coefficients.push(res);
}
debug_assert_eq!(actual_coefficients.len(), coefficients);
}
// Now do the ECDHs for the encryption
let mut encryption_masks = Vec::with_capacity(ecdh_public_keys.len());
let mut ecdh_commitments = Vec::with_capacity(2 * ecdh_public_keys.len());
let mut ecdh_commitments_xy = Vec::with_capacity(ecdh_public_keys.len());
for ecdh_public_key in ecdh_public_keys {
ecdh_commitments_xy.push([(C::F::ZERO, C::F::ZERO); 2]);
let mut res = Zeroizing::new(C::F::ZERO);
for j in 0 .. 2 {
let mut ecdh_private_key;
loop {
ecdh_private_key = <C::EmbeddedCurve as Ciphersuite>::F::random(&mut *rng);
// Generate a non-0 ECDH private key, as necessary to not produce an identity output
// Identity isn't representable with the divisors, hence the explicit effort
if bool::from(!ecdh_private_key.is_zero()) {
break;
}
}
let dlog =
ScalarDecomposition::<<C::EmbeddedCurve as Ciphersuite>::F>::new(ecdh_private_key);
let ecdh_commitment = <C::EmbeddedCurve as Ciphersuite>::generator() * ecdh_private_key;
ecdh_commitments.push(ecdh_commitment);
ecdh_commitments_xy.last_mut().unwrap()[j] =
<<C::EmbeddedCurve as Ciphersuite>::G as DivisorCurve>::to_xy(ecdh_commitment).unwrap();
// Start by pushing the discrete logarithm onto the tape
for coefficient in dlog.decomposition() {
vector_commitment_tape.push(<_>::from(*coefficient));
}
// Push a divisor for proving that we're using the correct scalar for the commitment
divisor(
&mut vector_commitment_tape,
&dlog,
false,
<<C as EvrfCurve>::EmbeddedCurve as Ciphersuite>::generator(),
<<C as EvrfCurve>::EmbeddedCurve as Ciphersuite>::generator() * ecdh_private_key,
);
// Push a divisor for the key we're performing the ECDH with
let (dh_x, _) = divisor(
&mut vector_commitment_tape,
&dlog,
j == 0,
*ecdh_public_key,
*ecdh_public_key * ecdh_private_key,
);
*res += dh_x;
ecdh_private_key.zeroize();
}
encryption_masks.push(res);
}
debug_assert_eq!(encryption_masks.len(), ecdh_public_keys.len());
// Now that we have the vector commitment tape, chunk it
let (_, generators_to_use) =
Self::muls_and_generators_to_use(coefficients, ecdh_public_keys.len());
let mut vector_commitments =
Vec::with_capacity(vector_commitment_tape.len().div_ceil(2 * generators_to_use));
for chunk in vector_commitment_tape.chunks(2 * generators_to_use) {
let g_values = chunk[.. generators_to_use.min(chunk.len())].to_vec().into();
let h_values = chunk[generators_to_use.min(chunk.len()) ..].to_vec().into();
vector_commitments.push(PedersenVectorCommitment {
g_values,
h_values,
mask: C::F::random(&mut *rng),
});
}
vector_commitment_tape.zeroize();
drop(vector_commitment_tape);
let mut commitments = Vec::with_capacity(coefficients + ecdh_public_keys.len());
for coefficient in &actual_coefficients {
commitments.push(PedersenCommitment { value: **coefficient, mask: C::F::random(&mut *rng) });
}
for enc_mask in &encryption_masks {
commitments.push(PedersenCommitment { value: **enc_mask, mask: C::F::random(&mut *rng) });
}
let mut transcript = ProverTranscript::new(transcript);
let commited_commitments = transcript.write_commitments(
vector_commitments
.iter()
.map(|commitment| {
commitment
.commit(generators.g_bold_slice(), generators.h_bold_slice(), generators.h())
.ok_or(AcError::NotEnoughGenerators)
})
.collect::<Result<_, _>>()?,
commitments
.iter()
.map(|commitment| commitment.commit(generators.g(), generators.h()))
.collect(),
);
for ecdh_commitment in ecdh_commitments {
transcript.push_point(ecdh_commitment);
}
let mut circuit = Circuit::prove(vector_commitments, commitments.clone());
Self::circuit(
&curve_spec,
evrf_public_key,
coefficients,
&ecdh_commitments_xy,
&generator_tables,
&mut circuit,
&mut transcript,
);
let (statement, Some(witness)) = circuit
.statement(
generators.reduce(generators_to_use).ok_or(AcError::NotEnoughGenerators)?,
commited_commitments,
)
.unwrap()
else {
panic!("proving yet wasn't yielded the witness");
};
statement.prove(&mut *rng, &mut transcript, witness).unwrap();
// Push the reveal onto the transcript
for commitment in &commitments {
transcript.push_point(generators.g() * commitment.value);
}
// Define a weight to aggregate the commitments with
let mut agg_weights = Vec::with_capacity(commitments.len());
agg_weights.push(C::F::ONE);
while agg_weights.len() < commitments.len() {
agg_weights.push(transcript.challenge::<C::F>());
}
let mut x = commitments
.iter()
.zip(&agg_weights)
.map(|(commitment, weight)| commitment.mask * *weight)
.sum::<C::F>();
// Do a Schnorr PoK for the randomness of the aggregated Pedersen commitment
let mut r = C::F::random(&mut *rng);
transcript.push_point(generators.h() * r);
let c = transcript.challenge::<C::F>();
transcript.push_scalar(r + (c * x));
r.zeroize();
x.zeroize();
Ok(EvrfProveResult {
coefficients: actual_coefficients,
encryption_masks,
proof: transcript.complete(),
})
}
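The proof ends with a standard Schnorr proof of knowledge for `x`, the aggregated commitment randomness: publish `R = r * H`, derive challenge `c` from the transcript, respond with `s = r + c * x`, and a verifier accepts iff `s * H = R + c * A` where `A = x * H`. A toy instance over a multiplicative group modulo a prime, with fixed constants standing in for the transcript challenge and prover randomness:

const P: u64 = 2_147_483_647; // 2^31 - 1
const ORDER: u64 = P - 1; // exponents live modulo the group order
const H: u64 = 5;

fn mod_pow(mut base: u64, mut exp: u64) -> u64 {
  let mut res = 1u64;
  base %= P;
  while exp > 0 {
    if exp & 1 == 1 {
      res = (res as u128 * base as u128 % P as u128) as u64;
    }
    base = (base as u128 * base as u128 % P as u128) as u64;
    exp >>= 1;
  }
  res
}

fn main() {
  let x = 1_234_567u64; // the aggregated mask being proven
  let a = mod_pow(H, x); // A = x * H, reconstructible by the verifier
  let r = 7_654_321u64; // prover randomness (fixed here for illustration)
  let big_r = mod_pow(H, r); // R, pushed onto the transcript
  let c = 42u64; // transcript challenge
  let s = (r as u128 + (c as u128 * x as u128)) % (ORDER as u128); // response
  // Verification: s * H == R + c * A, written multiplicatively in this toy group
  assert_eq!(mod_pow(H, s as u64), (big_r as u128 * mod_pow(a, c) as u128 % P as u128) as u64);
}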
/// Verify an eVRF proof, returning the commitments output.
#[allow(clippy::too_many_arguments)]
pub(crate) fn verify(
rng: &mut (impl RngCore + CryptoRng),
generators: &Generators<C>,
verifier: &mut BatchVerifier<C>,
transcript: [u8; 32],
coefficients: usize,
ecdh_public_keys: &[<<C as EvrfCurve>::EmbeddedCurve as Ciphersuite>::G],
evrf_public_key: <<C as EvrfCurve>::EmbeddedCurve as Ciphersuite>::G,
proof: &[u8],
) -> Result<EvrfVerifyResult<C>, ()> {
let curve_spec = CurveSpec {
a: <<C as EvrfCurve>::EmbeddedCurve as Ciphersuite>::G::a(),
b: <<C as EvrfCurve>::EmbeddedCurve as Ciphersuite>::G::b(),
};
let mut generator_tables = Vec::with_capacity(1 + (2 * coefficients) + ecdh_public_keys.len());
{
let (x, y) =
<C::EmbeddedCurve as Ciphersuite>::G::to_xy(<C::EmbeddedCurve as Ciphersuite>::generator())
.unwrap();
generator_tables.push(GeneratorTable::new(&curve_spec, x, y));
}
let points = Self::transcript_to_points(transcript, coefficients);
for generator in points {
let (x, y) = <C::EmbeddedCurve as Ciphersuite>::G::to_xy(generator).unwrap();
generator_tables.push(GeneratorTable::new(&curve_spec, x, y));
}
for generator in ecdh_public_keys {
let (x, y) = <C::EmbeddedCurve as Ciphersuite>::G::to_xy(*generator).unwrap();
generator_tables.push(GeneratorTable::new(&curve_spec, x, y));
}
let (_, generators_to_use) =
Self::muls_and_generators_to_use(coefficients, ecdh_public_keys.len());
let mut transcript = VerifierTranscript::new(transcript, proof);
let dlog_len = <C::EmbeddedCurveParameters as DiscreteLogParameters>::ScalarBits::USIZE;
let divisor_len = 1 +
<C::EmbeddedCurveParameters as DiscreteLogParameters>::XCoefficientsMinusOne::USIZE +
<C::EmbeddedCurveParameters as DiscreteLogParameters>::YxCoefficients::USIZE +
1;
let dlog_proof_len = divisor_len + 2;
let coeffs_vc_variables = dlog_len + ((1 + (2 * coefficients)) * dlog_proof_len);
let ecdhs_vc_variables = ((2 * ecdh_public_keys.len()) * dlog_len) +
((2 * 2 * ecdh_public_keys.len()) * dlog_proof_len);
let vcs = (coeffs_vc_variables + ecdhs_vc_variables).div_ceil(2 * generators_to_use);
let all_commitments =
transcript.read_commitments(vcs, coefficients + ecdh_public_keys.len()).map_err(|_| ())?;
let commitments = all_commitments.V().to_vec();
let mut ecdh_keys = Vec::with_capacity(ecdh_public_keys.len());
let mut ecdh_keys_xy = Vec::with_capacity(ecdh_public_keys.len());
for _ in 0 .. ecdh_public_keys.len() {
let ecdh_keys_i = [
transcript.read_point::<C::EmbeddedCurve>().map_err(|_| ())?,
transcript.read_point::<C::EmbeddedCurve>().map_err(|_| ())?,
];
ecdh_keys.push(ecdh_keys_i);
// This bans identity (zero) ECDH keys, as the identity has no affine coordinates
ecdh_keys_xy.push([
<<C::EmbeddedCurve as Ciphersuite>::G as DivisorCurve>::to_xy(ecdh_keys_i[0]).ok_or(())?,
<<C::EmbeddedCurve as Ciphersuite>::G as DivisorCurve>::to_xy(ecdh_keys_i[1]).ok_or(())?,
]);
}
let mut circuit = Circuit::verify();
Self::circuit(
&curve_spec,
<C::EmbeddedCurve as Ciphersuite>::G::to_xy(evrf_public_key).ok_or(())?,
coefficients,
&ecdh_keys_xy,
&generator_tables,
&mut circuit,
&mut transcript,
);
let (statement, None) =
circuit.statement(generators.reduce(generators_to_use).ok_or(())?, all_commitments).unwrap()
else {
panic!("verifying yet was yielded a witness");
};
statement.verify(rng, verifier, &mut transcript).map_err(|_| ())?;
// Read the openings for the commitments
let mut openings = Vec::with_capacity(commitments.len());
for _ in 0 .. commitments.len() {
openings.push(transcript.read_point::<C>().map_err(|_| ())?);
}
// Verify the openings of the commitments
let mut agg_weights = Vec::with_capacity(commitments.len());
agg_weights.push(C::F::ONE);
while agg_weights.len() < commitments.len() {
agg_weights.push(transcript.challenge::<C::F>());
}
let sum_points =
openings.iter().zip(&agg_weights).map(|(point, weight)| *point * *weight).sum::<C::G>();
let sum_commitments =
commitments.into_iter().zip(agg_weights).map(|(point, weight)| point * weight).sum::<C::G>();
#[allow(non_snake_case)]
let A = sum_commitments - sum_points;
#[allow(non_snake_case)]
let R = transcript.read_point::<C>().map_err(|_| ())?;
let c = transcript.challenge::<C::F>();
let s = transcript.read_scalar::<C>().map_err(|_| ())?;
// Doesn't batch verify this as we can't access the internals of the GBP batch verifier
if (R + (A * c)) != (generators.h() * s) {
Err(())?;
}
if !transcript.complete().is_empty() {
Err(())?
};
let encryption_commitments = openings[coefficients ..].to_vec();
let coefficients = openings[.. coefficients].to_vec();
Ok(EvrfVerifyResult { coefficients, ecdh_keys, encryption_commitments })
}
}

View File

@@ -4,7 +4,6 @@
use core::fmt::{self, Debug};
#[cfg(feature = "std")]
use thiserror::Error;
use zeroize::Zeroize;
@@ -21,6 +20,10 @@ pub mod encryption;
#[cfg(feature = "std")]
pub mod pedpop;
/// The one-round DKG described in the [eVRF paper](https://eprint.iacr.org/2024/397).
#[cfg(all(feature = "std", feature = "evrf"))]
pub mod evrf;
/// Promote keys between ciphersuites.
#[cfg(feature = "std")]
pub mod promote;
@@ -63,8 +66,7 @@ impl fmt::Display for Participant {
}
/// Various errors possible during key generation.
#[derive(Clone, PartialEq, Eq, Debug)]
#[cfg_attr(feature = "std", derive(Error))]
#[derive(Clone, PartialEq, Eq, Debug, Error)]
pub enum DkgError<B: Clone + PartialEq + Eq + Debug> {
/// A parameter was zero.
#[cfg_attr(feature = "std", error("a parameter was 0 (threshold {0}, participants {1})"))]
@@ -205,25 +207,37 @@ mod lib {
}
}
/// Calculate the lagrange coefficient for a signing set.
pub fn lagrange<F: PrimeField>(i: Participant, included: &[Participant]) -> F {
let i_f = F::from(u64::from(u16::from(i)));
#[derive(Clone, PartialEq, Eq, Debug, Zeroize)]
pub(crate) enum Interpolation<F: Zeroize + PrimeField> {
Constant(Vec<F>),
Lagrange,
}
let mut num = F::ONE;
let mut denom = F::ONE;
for l in included {
if i == *l {
continue;
impl<F: Zeroize + PrimeField> Interpolation<F> {
pub(crate) fn interpolation_factor(&self, i: Participant, included: &[Participant]) -> F {
match self {
Interpolation::Constant(c) => c[usize::from(u16::from(i) - 1)],
Interpolation::Lagrange => {
let i_f = F::from(u64::from(u16::from(i)));
let mut num = F::ONE;
let mut denom = F::ONE;
for l in included {
if i == *l {
continue;
}
let share = F::from(u64::from(u16::from(*l)));
num *= share;
denom *= share - i_f;
}
// Safe as this will only be 0 if we're part of the above loop
// (which we have an if case to avoid)
num * denom.invert().unwrap()
}
}
let share = F::from(u64::from(u16::from(*l)));
num *= share;
denom *= share - i_f;
}
// Safe as this will only be 0 if we're part of the above loop
// (which we have an if case to avoid)
num * denom.invert().unwrap()
}
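For reference, the Lagrange branch computes the standard interpolation factor for evaluating the shared polynomial at zero; in LaTeX form (a restatement of the code above, not new material), for signer `i` within included set `S`:

\lambda_{i,S} = \prod_{l \in S,\ l \neq i} \frac{l}{l - i}, \qquad \sum_{i \in S} \lambda_{i,S} \, f(i) = f(0) \quad \text{for any } f \text{ with } \deg f < |S|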
/// Keys and verification shares generated by a DKG.
@@ -232,6 +246,8 @@ mod lib {
pub struct ThresholdCore<C: Ciphersuite> {
/// Threshold Parameters.
pub(crate) params: ThresholdParams,
/// The interpolation method used.
pub(crate) interpolation: Interpolation<C::F>,
/// Secret share key.
pub(crate) secret_share: Zeroizing<C::F>,
@@ -246,6 +262,7 @@ mod lib {
fmt
.debug_struct("ThresholdCore")
.field("params", &self.params)
.field("interpolation", &self.interpolation)
.field("group_key", &self.group_key)
.field("verification_shares", &self.verification_shares)
.finish_non_exhaustive()
@@ -255,6 +272,7 @@ mod lib {
impl<C: Ciphersuite> Zeroize for ThresholdCore<C> {
fn zeroize(&mut self) {
self.params.zeroize();
self.interpolation.zeroize();
self.secret_share.zeroize();
self.group_key.zeroize();
for share in self.verification_shares.values_mut() {
@@ -266,16 +284,14 @@ mod lib {
impl<C: Ciphersuite> ThresholdCore<C> {
pub(crate) fn new(
params: ThresholdParams,
interpolation: Interpolation<C::F>,
secret_share: Zeroizing<C::F>,
verification_shares: HashMap<Participant, C::G>,
) -> ThresholdCore<C> {
let t = (1 ..= params.t()).map(Participant).collect::<Vec<_>>();
ThresholdCore {
params,
secret_share,
group_key: t.iter().map(|i| verification_shares[i] * lagrange::<C::F>(*i, &t)).sum(),
verification_shares,
}
let group_key =
t.iter().map(|i| verification_shares[i] * interpolation.interpolation_factor(*i, &t)).sum();
ThresholdCore { params, interpolation, secret_share, group_key, verification_shares }
}
/// Parameters for these keys.
@@ -304,6 +320,15 @@ mod lib {
writer.write_all(&self.params.t.to_le_bytes())?;
writer.write_all(&self.params.n.to_le_bytes())?;
writer.write_all(&self.params.i.to_bytes())?;
match &self.interpolation {
Interpolation::Constant(c) => {
writer.write_all(&[0])?;
for c in c {
writer.write_all(c.to_repr().as_ref())?;
}
}
Interpolation::Lagrange => writer.write_all(&[1])?,
};
let mut share_bytes = self.secret_share.to_repr();
writer.write_all(share_bytes.as_ref())?;
share_bytes.as_mut().zeroize();
@@ -352,6 +377,20 @@ mod lib {
)
};
let mut interpolation = [0];
reader.read_exact(&mut interpolation)?;
let interpolation = match interpolation[0] {
0 => Interpolation::Constant({
let mut res = Vec::with_capacity(usize::from(n));
for _ in 0 .. n {
res.push(C::read_F(reader)?);
}
res
}),
1 => Interpolation::Lagrange,
_ => Err(io::Error::other("invalid interpolation method"))?,
};
let secret_share = Zeroizing::new(C::read_F(reader)?);
let mut verification_shares = HashMap::new();
@@ -361,6 +400,7 @@ mod lib {
Ok(ThresholdCore::new(
ThresholdParams::new(t, n, i).map_err(|_| io::Error::other("invalid parameters"))?,
interpolation,
secret_share,
verification_shares,
))
@@ -383,6 +423,7 @@ mod lib {
/// View of keys, interpolated and offset for usage.
#[derive(Clone)]
pub struct ThresholdView<C: Ciphersuite> {
interpolation: Interpolation<C::F>,
offset: C::F,
group_key: C::G,
included: Vec<Participant>,
@@ -395,6 +436,7 @@ mod lib {
fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result {
fmt
.debug_struct("ThresholdView")
.field("interpolation", &self.interpolation)
.field("offset", &self.offset)
.field("group_key", &self.group_key)
.field("included", &self.included)
@@ -480,12 +522,13 @@ mod lib {
included.sort();
let mut secret_share = Zeroizing::new(
lagrange::<C::F>(self.params().i(), &included) * self.secret_share().deref(),
self.core.interpolation.interpolation_factor(self.params().i(), &included) *
self.secret_share().deref(),
);
let mut verification_shares = self.verification_shares();
for (i, share) in &mut verification_shares {
*share *= lagrange::<C::F>(*i, &included);
*share *= self.core.interpolation.interpolation_factor(*i, &included);
}
// The offset is included by adding it to the participant with the lowest ID
@@ -496,6 +539,7 @@ mod lib {
*verification_shares.get_mut(&included[0]).unwrap() += C::generator() * offset;
Ok(ThresholdView {
interpolation: self.core.interpolation.clone(),
offset,
group_key: self.group_key(),
secret_share,
@@ -528,6 +572,14 @@ mod lib {
&self.included
}
/// Return the interpolation factor for a signer.
pub fn interpolation_factor(&self, participant: Participant) -> Option<C::F> {
if !self.included.contains(&participant) {
None?
}
Some(self.interpolation.interpolation_factor(participant, &self.included))
}
/// Return the interpolated, offset secret share.
pub fn secret_share(&self) -> &Zeroizing<C::F> {
&self.secret_share

View File

@@ -7,8 +7,6 @@ use std_shims::collections::HashMap;
#[cfg(feature = "std")]
use zeroize::Zeroizing;
#[cfg(feature = "std")]
use ciphersuite::group::ff::Field;
use ciphersuite::{
group::{Group, GroupEncoding},
Ciphersuite,
@@ -16,7 +14,7 @@ use ciphersuite::{
use crate::DkgError;
#[cfg(feature = "std")]
use crate::{Participant, ThresholdParams, ThresholdCore, lagrange};
use crate::{Participant, ThresholdParams, Interpolation, ThresholdCore};
fn check_keys<C: Ciphersuite>(keys: &[C::G]) -> Result<u16, DkgError<()>> {
if keys.is_empty() {
@@ -67,6 +65,7 @@ pub fn musig_key<C: Ciphersuite>(context: &[u8], keys: &[C::G]) -> Result<C::G,
let transcript = binding_factor_transcript::<C>(context, keys)?;
let mut res = C::G::identity();
for i in 1 ..= keys_len {
// TODO: Calculate this with a multiexp
res += keys[usize::from(i - 1)] * binding_factor::<C>(transcript.clone(), i);
}
Ok(res)
@@ -104,38 +103,26 @@ pub fn musig<C: Ciphersuite>(
binding.push(binding_factor::<C>(transcript.clone(), i));
}
// Multiply our private key by our binding factor
let mut secret_share = private_key.clone();
*secret_share *= binding[pos];
// Our secret share is our private key
let secret_share = private_key.clone();
// Calculate verification shares
let mut verification_shares = HashMap::new();
// When this library offers a ThresholdView for a specific signing set, it applies the lagrange
// factor
// Since this is an n-of-n scheme, there's only one possible signing set, and one possible
// lagrange factor
// In the name of simplicity, we define the group key as the sum of all bound keys
// Accordingly, the secret share must be multiplied by the inverse of the lagrange factor, along
// with all verification shares
// This is less performant than simply defining the group key as the sum of all post-lagrange
// bound keys, yet the simplicity is preferred
let included = (1 ..= keys_len)
  // This error also shouldn't be possible, for the same reasons as documented above
  .map(|l| Participant::new(l).ok_or(DkgError::InvalidSigningSet))
  .collect::<Result<Vec<_>, _>>()?;
let mut group_key = C::G::identity();
for (l, p) in included.iter().enumerate() {
  let bound = keys[l] * binding[l];
  group_key += bound;
  let lagrange_inv = lagrange::<C::F>(*p, &included).invert().unwrap();
  if params.i() == *p {
    *secret_share *= lagrange_inv;
  }
  verification_shares.insert(*p, bound * lagrange_inv);
}
for l in 1 ..= keys_len {
  let key = keys[usize::from(l) - 1];
  group_key += key * binding[usize::from(l - 1)];
  // These errors also shouldn't be possible, for the same reasons as documented above
  verification_shares.insert(Participant::new(l).ok_or(DkgError::InvalidSigningSet)?, key);
}
debug_assert_eq!(C::generator() * secret_share.deref(), verification_shares[&params.i()]);
debug_assert_eq!(musig_key::<C>(context, keys).unwrap(), group_key);
Ok(ThresholdCore { params, secret_share, group_key, verification_shares })
Ok(ThresholdCore::new(
  params,
  Interpolation::Constant(binding),
  secret_share,
  verification_shares,
))
}
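To sketch why the rewrite is sound: with `Interpolation::Constant(binding)`, participant $l$'s interpolation factor is simply its binding factor $b_l$, so interpolating the secret shares $k_l$ (the raw private keys) reconstructs

$$\sum_l b_l k_l,$$

the discrete logarithm of the group key $\sum_l b_l K_l$ as defined by `musig_key`. The removed code reached the same group key by pre-multiplying each share and verification share by the inverse of its Lagrange coefficient, so the Lagrange interpolation later applied for the (sole possible) signing set would cancel back to the binding factors.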

View File

@@ -22,9 +22,9 @@ use multiexp::{multiexp_vartime, BatchVerifier};
use schnorr::SchnorrSignature;
use crate::{
Participant, DkgError, ThresholdParams, ThresholdCore, validate_map,
Participant, DkgError, ThresholdParams, Interpolation, ThresholdCore, validate_map,
encryption::{
ReadWrite, EncryptionKeyMessage, EncryptedMessage, Encryption, EncryptionKeyProof,
ReadWrite, EncryptionKeyMessage, EncryptedMessage, Encryption, Decryption, EncryptionKeyProof,
DecryptionError,
},
};
@@ -32,10 +32,10 @@ use crate::{
type FrostError<C> = DkgError<EncryptionKeyProof<C>>;
#[allow(non_snake_case)]
fn challenge<C: Ciphersuite>(context: &str, l: Participant, R: &[u8], Am: &[u8]) -> C::F {
fn challenge<C: Ciphersuite>(context: [u8; 32], l: Participant, R: &[u8], Am: &[u8]) -> C::F {
let mut transcript = RecommendedTranscript::new(b"DKG FROST v0.2");
transcript.domain_separate(b"schnorr_proof_of_knowledge");
transcript.append_message(b"context", context.as_bytes());
transcript.append_message(b"context", context);
transcript.append_message(b"participant", l.to_bytes());
transcript.append_message(b"nonce", R);
transcript.append_message(b"commitments", Am);
@@ -86,15 +86,15 @@ impl<C: Ciphersuite> ReadWrite for Commitments<C> {
#[derive(Debug, Zeroize)]
pub struct KeyGenMachine<C: Ciphersuite> {
params: ThresholdParams,
context: String,
context: [u8; 32],
_curve: PhantomData<C>,
}
impl<C: Ciphersuite> KeyGenMachine<C> {
/// Create a new machine to generate a key.
///
/// The context string should be unique among multisigs.
pub fn new(params: ThresholdParams, context: String) -> KeyGenMachine<C> {
/// The context should be unique among multisigs.
pub fn new(params: ThresholdParams, context: [u8; 32]) -> KeyGenMachine<C> {
KeyGenMachine { params, context, _curve: PhantomData }
}
@@ -129,11 +129,11 @@ impl<C: Ciphersuite> KeyGenMachine<C> {
// There's no reason to spend the time and effort to make this deterministic besides a
// general obsession with canonicity and determinism though
r,
challenge::<C>(&self.context, self.params.i(), nonce.to_bytes().as_ref(), &cached_msg),
challenge::<C>(self.context, self.params.i(), nonce.to_bytes().as_ref(), &cached_msg),
);
// Additionally create an encryption mechanism to protect the secret shares
let encryption = Encryption::new(self.context.clone(), Some(self.params.i), rng);
let encryption = Encryption::new(self.context, self.params.i, rng);
// Step 4: Broadcast
let msg =
@@ -225,7 +225,7 @@ impl<F: PrimeField> ReadWrite for SecretShare<F> {
#[derive(Zeroize)]
pub struct SecretShareMachine<C: Ciphersuite> {
params: ThresholdParams,
context: String,
context: [u8; 32],
coefficients: Vec<Zeroizing<C::F>>,
our_commitments: Vec<C::G>,
encryption: Encryption<C>,
@@ -274,7 +274,7 @@ impl<C: Ciphersuite> SecretShareMachine<C> {
&mut batch,
l,
msg.commitments[0],
challenge::<C>(&self.context, l, msg.sig.R.to_bytes().as_ref(), &msg.cached_msg),
challenge::<C>(self.context, l, msg.sig.R.to_bytes().as_ref(), &msg.cached_msg),
);
commitments.insert(l, msg.commitments.drain(..).collect::<Vec<_>>());
@@ -472,9 +472,10 @@ impl<C: Ciphersuite> KeyMachine<C> {
let KeyMachine { commitments, encryption, params, secret } = self;
Ok(BlameMachine {
commitments,
encryption,
encryption: encryption.into_decryption(),
result: Some(ThresholdCore {
params,
interpolation: Interpolation::Lagrange,
secret_share: secret,
group_key: stripes[0],
verification_shares,
@@ -486,7 +487,7 @@ impl<C: Ciphersuite> KeyMachine<C> {
/// A machine capable of handling blame proofs.
pub struct BlameMachine<C: Ciphersuite> {
commitments: HashMap<Participant, Vec<C::G>>,
encryption: Encryption<C>,
encryption: Decryption<C>,
result: Option<ThresholdCore<C>>,
}
@@ -505,7 +506,6 @@ impl<C: Ciphersuite> Zeroize for BlameMachine<C> {
for commitments in self.commitments.values_mut() {
commitments.zeroize();
}
self.encryption.zeroize();
self.result.zeroize();
}
}
@@ -598,14 +598,13 @@ impl<C: Ciphersuite> AdditionalBlameMachine<C> {
/// authenticated as having come from the supposed party and verified as valid. Usage of invalid
/// commitments is considered undefined behavior, and may cause everything from inaccurate blame
/// to panics.
pub fn new<R: RngCore + CryptoRng>(
rng: &mut R,
context: String,
pub fn new(
context: [u8; 32],
n: u16,
mut commitment_msgs: HashMap<Participant, EncryptionKeyMessage<C, Commitments<C>>>,
) -> Result<Self, FrostError<C>> {
let mut commitments = HashMap::new();
let mut encryption = Encryption::new(context, None, rng);
let mut encryption = Decryption::new(context);
for i in 1 ..= n {
let i = Participant::new(i).unwrap();
let Some(msg) = commitment_msgs.remove(&i) else { Err(DkgError::MissingParticipant(i))? };

View File

@@ -113,6 +113,7 @@ impl<C1: Ciphersuite, C2: Ciphersuite<F = C1::F, G = C1::G>> GeneratorPromotion<
Ok(ThresholdKeys {
core: Arc::new(ThresholdCore::new(
params,
self.base.core.interpolation.clone(),
self.base.secret_share().clone(),
verification_shares,
)),

View File

@@ -0,0 +1,79 @@
use std::collections::HashMap;
use zeroize::Zeroizing;
use rand_core::OsRng;
use rand::seq::SliceRandom;
use ciphersuite::{group::ff::Field, Ciphersuite};
use crate::{
Participant,
evrf::*,
tests::{THRESHOLD, PARTICIPANTS, recover_key},
};
mod proof;
use proof::{Pallas, Vesta};
#[test]
fn evrf_dkg() {
let generators = EvrfGenerators::<Pallas>::new(THRESHOLD, PARTICIPANTS);
let context = [0; 32];
let mut priv_keys = vec![];
let mut pub_keys = vec![];
for i in 0 .. PARTICIPANTS {
let priv_key = <Vesta as Ciphersuite>::F::random(&mut OsRng);
pub_keys.push(<Vesta as Ciphersuite>::generator() * priv_key);
priv_keys.push((Participant::new(1 + i).unwrap(), Zeroizing::new(priv_key)));
}
let mut participations = HashMap::new();
// Shuffle the private keys so we iterate over a random subset of them
priv_keys.shuffle(&mut OsRng);
for (i, priv_key) in priv_keys.iter().take(usize::from(THRESHOLD)) {
participations.insert(
*i,
EvrfDkg::<Pallas>::participate(
&mut OsRng,
&generators,
context,
THRESHOLD,
&pub_keys,
priv_key,
)
.unwrap(),
);
}
let VerifyResult::Valid(dkg) = EvrfDkg::<Pallas>::verify(
&mut OsRng,
&generators,
context,
THRESHOLD,
&pub_keys,
&participations,
)
.unwrap() else {
panic!("verify didn't return VerifyResult::Valid")
};
let mut group_key = None;
let mut verification_shares = None;
let mut all_keys = HashMap::new();
for (i, priv_key) in priv_keys {
let keys = dkg.keys(&priv_key).into_iter().next().unwrap();
assert_eq!(keys.params().i(), i);
assert_eq!(keys.params().t(), THRESHOLD);
assert_eq!(keys.params().n(), PARTICIPANTS);
group_key = group_key.or(Some(keys.group_key()));
verification_shares = verification_shares.or(Some(keys.verification_shares()));
assert_eq!(Some(keys.group_key()), group_key);
assert_eq!(Some(keys.verification_shares()), verification_shares);
all_keys.insert(i, keys);
}
// TODO: Test for all possible combinations of keys
assert_eq!(Pallas::generator() * recover_key(&all_keys), group_key.unwrap());
}

View File

@@ -0,0 +1,118 @@
use std::time::Instant;
use rand_core::OsRng;
use zeroize::{Zeroize, Zeroizing};
use generic_array::typenum::{Sum, Diff, Quot, U, U1, U2};
use blake2::{Digest, Blake2b512};
use ciphersuite::{
group::{
ff::{FromUniformBytes, Field, PrimeField},
Group,
},
Ciphersuite, Secp256k1, Ed25519, Ristretto,
};
use pasta_curves::{Ep, Eq, Fp, Fq};
use generalized_bulletproofs::tests::generators;
use generalized_bulletproofs_ec_gadgets::DiscreteLogParameters;
use crate::evrf::proof::*;
#[derive(Clone, Copy, PartialEq, Eq, Debug, Zeroize)]
pub(crate) struct Pallas;
impl Ciphersuite for Pallas {
type F = Fq;
type G = Ep;
type H = Blake2b512;
const ID: &'static [u8] = b"Pallas";
fn generator() -> Ep {
Ep::generator()
}
fn hash_to_F(dst: &[u8], msg: &[u8]) -> Self::F {
// This naive concat may be insecure in a real world deployment
// This is solely test code so it's fine
Self::F::from_uniform_bytes(&Self::H::digest([dst, msg].concat()).into())
}
}
#[derive(Clone, Copy, PartialEq, Eq, Debug, Zeroize)]
pub(crate) struct Vesta;
impl Ciphersuite for Vesta {
type F = Fp;
type G = Eq;
type H = Blake2b512;
const ID: &'static [u8] = b"Vesta";
fn generator() -> Eq {
Eq::generator()
}
fn hash_to_F(dst: &[u8], msg: &[u8]) -> Self::F {
// This naive concat may be insecure in a real world deployment
// This is solely test code so it's fine
Self::F::from_uniform_bytes(&Self::H::digest([dst, msg].concat()).into())
}
}
pub struct VestaParams;
impl DiscreteLogParameters for VestaParams {
type ScalarBits = U<{ <<Vesta as Ciphersuite>::F as PrimeField>::NUM_BITS as usize }>;
type XCoefficients = Quot<Sum<Self::ScalarBits, U1>, U2>;
type XCoefficientsMinusOne = Diff<Self::XCoefficients, U1>;
type YxCoefficients = Diff<Quot<Sum<Sum<Self::ScalarBits, U1>, U1>, U2>, U2>;
}
impl EvrfCurve for Pallas {
type EmbeddedCurve = Vesta;
type EmbeddedCurveParameters = VestaParams;
}
fn evrf_proof_test<C: EvrfCurve>() {
let generators = generators(1024);
let vesta_private_key = Zeroizing::new(<C::EmbeddedCurve as Ciphersuite>::F::random(&mut OsRng));
let ecdh_public_keys = [
<C::EmbeddedCurve as Ciphersuite>::G::random(&mut OsRng),
<C::EmbeddedCurve as Ciphersuite>::G::random(&mut OsRng),
];
let time = Instant::now();
let res =
Evrf::<C>::prove(&mut OsRng, &generators, [0; 32], 1, &ecdh_public_keys, &vesta_private_key)
.unwrap();
println!("Proving time: {:?}", time.elapsed());
let time = Instant::now();
let mut verifier = generators.batch_verifier();
Evrf::<C>::verify(
&mut OsRng,
&generators,
&mut verifier,
[0; 32],
1,
&ecdh_public_keys,
C::EmbeddedCurve::generator() * *vesta_private_key,
&res.proof,
)
.unwrap();
assert!(generators.verify(verifier));
println!("Verifying time: {:?}", time.elapsed());
}
#[test]
fn pallas_evrf_proof_test() {
evrf_proof_test::<Pallas>();
}
#[test]
fn secp256k1_evrf_proof_test() {
evrf_proof_test::<Secp256k1>();
}
#[test]
fn ed25519_evrf_proof_test() {
evrf_proof_test::<Ed25519>();
}
#[test]
fn ristretto_evrf_proof_test() {
evrf_proof_test::<Ristretto>();
}

View File

@@ -6,7 +6,7 @@ use rand_core::{RngCore, CryptoRng};
use ciphersuite::{group::ff::Field, Ciphersuite};
use crate::{Participant, ThresholdCore, ThresholdKeys, lagrange, musig::musig as musig_fn};
use crate::{Participant, ThresholdCore, ThresholdKeys, musig::musig as musig_fn};
mod musig;
pub use musig::test_musig;
@@ -19,6 +19,9 @@ use pedpop::pedpop_gen;
mod promote;
use promote::test_generator_promotion;
#[cfg(all(test, feature = "evrf"))]
mod evrf;
/// Constant amount of participants to use when testing.
pub const PARTICIPANTS: u16 = 5;
/// Constant threshold of participants to use when testing.
@@ -43,7 +46,8 @@ pub fn recover_key<C: Ciphersuite>(keys: &HashMap<Participant, ThresholdKeys<C>>
let included = keys.keys().copied().collect::<Vec<_>>();
let group_private = keys.iter().fold(C::F::ZERO, |accum, (i, keys)| {
accum + (lagrange::<C::F>(*i, &included) * keys.secret_share().deref())
accum +
(first.core.interpolation.interpolation_factor(*i, &included) * keys.secret_share().deref())
});
assert_eq!(C::generator() * group_private, first.group_key(), "failed to recover keys");
group_private

View File

@@ -14,7 +14,7 @@ use crate::{
type PedPoPEncryptedMessage<C> = EncryptedMessage<C, SecretShare<<C as Ciphersuite>::F>>;
type PedPoPSecretShares<C> = HashMap<Participant, PedPoPEncryptedMessage<C>>;
const CONTEXT: &str = "DKG Test Key Generation";
const CONTEXT: [u8; 32] = *b"DKG Test Key Generation ";
// Commit, then return commitment messages, enc keys, and shares
#[allow(clippy::type_complexity)]
@@ -31,7 +31,7 @@ fn commit_enc_keys_and_shares<R: RngCore + CryptoRng, C: Ciphersuite>(
let mut enc_keys = HashMap::new();
for i in (1 ..= PARTICIPANTS).map(Participant) {
let params = ThresholdParams::new(THRESHOLD, PARTICIPANTS, i).unwrap();
let machine = KeyGenMachine::<C>::new(params, CONTEXT.to_string());
let machine = KeyGenMachine::<C>::new(params, CONTEXT);
let (machine, these_commitments) = machine.generate_coefficients(rng);
machines.insert(i, machine);
@@ -147,14 +147,12 @@ mod literal {
// Verify machines constructed with AdditionalBlameMachine::new work
assert_eq!(
AdditionalBlameMachine::new(
&mut OsRng,
CONTEXT.to_string(),
PARTICIPANTS,
commitment_msgs.clone()
)
.unwrap()
.blame(ONE, TWO, msg.clone(), blame.clone()),
AdditionalBlameMachine::new(CONTEXT, PARTICIPANTS, commitment_msgs.clone()).unwrap().blame(
ONE,
TWO,
msg.clone(),
blame.clone()
),
ONE,
);
}

View File

@@ -6,7 +6,7 @@ license = "MIT"
repository = "https://github.com/serai-dex/serai/tree/develop/crypto/dleq"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
edition = "2021"
rust-version = "1.79"
rust-version = "1.81"
[package.metadata.docs.rs]
all-features = true
@@ -18,7 +18,7 @@ workspace = true
[dependencies]
rustversion = "1"
thiserror = { version = "1", optional = true }
thiserror = { version = "2", default-features = false, optional = true }
rand_core = { version = "0.6", default-features = false }
zeroize = { version = "^1.5", default-features = false, features = ["zeroize_derive"] }
@@ -44,7 +44,7 @@ dalek-ff-group = { path = "../dalek-ff-group" }
transcript = { package = "flexible-transcript", path = "../transcript", features = ["recommended"] }
[features]
std = ["rand_core/std", "zeroize/std", "digest/std", "transcript/std", "ff/std", "multiexp?/std"]
std = ["thiserror?/std", "rand_core/std", "zeroize/std", "digest/std", "transcript/std", "ff/std", "multiexp?/std"]
serialize = ["std"]
# Needed for cross-group DLEqs

View File

@@ -92,7 +92,7 @@ impl<G: PrimeGroup> Generators<G> {
}
/// Error for cross-group DLEq proofs.
#[derive(Error, PartialEq, Eq, Debug)]
#[derive(Clone, Copy, PartialEq, Eq, Debug, Error)]
pub enum DLEqError {
/// Invalid proof length.
#[error("invalid proof length")]

View File

@@ -7,7 +7,7 @@ repository = "https://github.com/serai-dex/serai/tree/develop/crypto/ed448"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = ["ed448", "ff", "group"]
edition = "2021"
rust-version = "1.66"
rust-version = "1.71"
[package.metadata.docs.rs]
all-features = true

View File

@@ -0,0 +1,21 @@
[package]
name = "generalized-bulletproofs-circuit-abstraction"
version = "0.1.0"
description = "An abstraction for arithmetic circuits over Generalized Bulletproofs"
license = "MIT"
repository = "https://github.com/serai-dex/serai/tree/develop/crypto/evrf/circuit-abstraction"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = ["bulletproofs", "circuit"]
edition = "2021"
rust-version = "1.80"
[package.metadata.docs.rs]
all-features = true
rustdoc-args = ["--cfg", "docsrs"]
[dependencies]
zeroize = { version = "^1.5", default-features = false, features = ["zeroize_derive"] }
ciphersuite = { path = "../../ciphersuite", version = "0.4", default-features = false, features = ["std"] }
generalized-bulletproofs = { path = "../generalized-bulletproofs" }

View File

@@ -0,0 +1,21 @@
MIT License
Copyright (c) 2024 Luke Parker
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

View File

@@ -0,0 +1,3 @@
# Generalized Bulletproofs Circuit Abstraction
A circuit abstraction around `generalized-bulletproofs`.

View File

@@ -0,0 +1,39 @@
use ciphersuite::{group::ff::Field, Ciphersuite};
use crate::*;
impl<C: Ciphersuite> Circuit<C> {
/// Constrain two linear combinations to be equal.
pub fn equality(&mut self, a: LinComb<C::F>, b: &LinComb<C::F>) {
self.constrain_equal_to_zero(a - b);
}
/// Calculate (and constrain) the inverse of a value.
///
/// A linear combination may optionally be passed as a constraint for the value being inverted.
/// A reference to the inverted value and its inverse is returned.
///
/// May panic if any linear combinations reference non-existent terms, the witness isn't provided
/// when proving/is provided when verifying, or if the witness is 0 (and accordingly doesn't have
/// an inverse).
pub fn inverse(
&mut self,
lincomb: Option<LinComb<C::F>>,
witness: Option<C::F>,
) -> (Variable, Variable) {
let (l, r, o) = self.mul(lincomb, None, witness.map(|f| (f, f.invert().unwrap())));
// The output of a value multiplied by its inverse is 1
// Constrain `1 o - 1 = 0`
self.constrain_equal_to_zero(LinComb::from(o).constant(-C::F::ONE));
(l, r)
}
/// Constrain two linear combinations as inequal.
///
/// May panic if any linear combinations reference non-existent terms.
pub fn inequality(&mut self, a: LinComb<C::F>, b: &LinComb<C::F>, witness: Option<(C::F, C::F)>) {
let l_constraint = a - b;
// The existence of a multiplicative inverse means a-b != 0, which means a != b
self.inverse(Some(l_constraint), witness.map(|(a, b)| a - b));
}
}
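To spell out the constraint system `inequality` builds: `inverse` allocates a multiplication gate whose left input is bound to $\ell = a - b$, and constrains its output to $1$. The gate itself enforces $\ell \cdot r = o$, so

$$(a - b) \cdot r = 1,$$

which is satisfiable only when $a \neq b$, as $0$ has no multiplicative inverse.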

View File

@@ -0,0 +1,192 @@
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
#![doc = include_str!("../README.md")]
#![deny(missing_docs)]
#![allow(non_snake_case)]
use zeroize::{Zeroize, ZeroizeOnDrop};
use ciphersuite::{
group::ff::{Field, PrimeField},
Ciphersuite,
};
use generalized_bulletproofs::{
ScalarVector, PedersenCommitment, PedersenVectorCommitment, ProofGenerators,
transcript::{Transcript as ProverTranscript, VerifierTranscript, Commitments},
arithmetic_circuit_proof::{AcError, ArithmeticCircuitStatement, ArithmeticCircuitWitness},
};
pub use generalized_bulletproofs::arithmetic_circuit_proof::{Variable, LinComb};
mod gadgets;
/// A trait for the transcript, whether proving or verifying, as necessary for sampling
/// challenges.
pub trait Transcript {
/// Sample a challenge from the transcript.
///
/// It is the caller's responsibility to have properly transcripted all variables prior to
/// sampling this challenge.
fn challenge<F: PrimeField>(&mut self) -> F;
}
impl Transcript for ProverTranscript {
fn challenge<F: PrimeField>(&mut self) -> F {
self.challenge()
}
}
impl Transcript for VerifierTranscript<'_> {
fn challenge<F: PrimeField>(&mut self) -> F {
self.challenge()
}
}
/// The witness for the satisfaction of this circuit.
#[derive(Clone, PartialEq, Eq, Debug, Zeroize, ZeroizeOnDrop)]
struct ProverData<C: Ciphersuite> {
aL: Vec<C::F>,
aR: Vec<C::F>,
C: Vec<PedersenVectorCommitment<C>>,
V: Vec<PedersenCommitment<C>>,
}
/// A struct representing a circuit.
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct Circuit<C: Ciphersuite> {
muls: usize,
// A series of linear combinations which must evaluate to 0.
constraints: Vec<LinComb<C::F>>,
prover: Option<ProverData<C>>,
}
impl<C: Ciphersuite> Circuit<C> {
/// Returns the amount of multiplications used by this circuit.
pub fn muls(&self) -> usize {
self.muls
}
/// Create an instance to prove satisfaction of a circuit with.
// TODO: Take the transcript here
#[allow(clippy::type_complexity)]
pub fn prove(
vector_commitments: Vec<PedersenVectorCommitment<C>>,
commitments: Vec<PedersenCommitment<C>>,
) -> Self {
Self {
muls: 0,
constraints: vec![],
prover: Some(ProverData { aL: vec![], aR: vec![], C: vector_commitments, V: commitments }),
}
}
/// Create an instance to verify a proof with.
// TODO: Take the transcript here
pub fn verify() -> Self {
Self { muls: 0, constraints: vec![], prover: None }
}
/// Evaluate a linear combination.
///
/// Yields WL aL + WR aR + WO aO + WCG CG + WCH CH + WV V + c.
///
/// May panic if the linear combination references non-existent terms.
///
/// Returns None if not a prover.
pub fn eval(&self, lincomb: &LinComb<C::F>) -> Option<C::F> {
self.prover.as_ref().map(|prover| {
let mut res = lincomb.c();
for (index, weight) in lincomb.WL() {
res += prover.aL[*index] * weight;
}
for (index, weight) in lincomb.WR() {
res += prover.aR[*index] * weight;
}
for (index, weight) in lincomb.WO() {
res += prover.aL[*index] * prover.aR[*index] * weight;
}
for (WCG, C) in lincomb.WCG().iter().zip(&prover.C) {
for (j, weight) in WCG {
res += C.g_values[*j] * weight;
}
}
for (WCH, C) in lincomb.WCH().iter().zip(&prover.C) {
for (j, weight) in WCH {
res += C.h_values[*j] * weight;
}
}
for (index, weight) in lincomb.WV() {
res += prover.V[*index].value * weight;
}
res
})
}
/// Multiply two values, optionally constrained, returning the constrainable left/right/out
/// terms.
///
/// May panic if any linear combinations reference non-existent terms or if the witness isn't
/// provided when proving/is provided when verifying.
pub fn mul(
&mut self,
a: Option<LinComb<C::F>>,
b: Option<LinComb<C::F>>,
witness: Option<(C::F, C::F)>,
) -> (Variable, Variable, Variable) {
let l = Variable::aL(self.muls);
let r = Variable::aR(self.muls);
let o = Variable::aO(self.muls);
self.muls += 1;
debug_assert_eq!(self.prover.is_some(), witness.is_some());
if let Some(witness) = witness {
let prover = self.prover.as_mut().unwrap();
prover.aL.push(witness.0);
prover.aR.push(witness.1);
}
if let Some(a) = a {
self.constrain_equal_to_zero(a.term(-C::F::ONE, l));
}
if let Some(b) = b {
self.constrain_equal_to_zero(b.term(-C::F::ONE, r));
}
(l, r, o)
}
/// Constrain a linear combination to be equal to 0.
///
/// May panic if the linear combination references non-existent terms.
pub fn constrain_equal_to_zero(&mut self, lincomb: LinComb<C::F>) {
self.constraints.push(lincomb);
}
/// Obtain the statement for this circuit.
///
/// If configured as the prover, the witness to use is also returned.
#[allow(clippy::type_complexity)]
pub fn statement(
self,
generators: ProofGenerators<'_, C>,
commitments: Commitments<C>,
) -> Result<(ArithmeticCircuitStatement<'_, C>, Option<ArithmeticCircuitWitness<C>>), AcError> {
let statement = ArithmeticCircuitStatement::new(generators, self.constraints, commitments)?;
let witness = self
.prover
.map(|mut prover| {
// We can't deconstruct the witness as it implements Drop (per ZeroizeOnDrop)
// Accordingly, we take the values within it and move forward with those
let mut aL = vec![];
std::mem::swap(&mut prover.aL, &mut aL);
let mut aR = vec![];
std::mem::swap(&mut prover.aR, &mut aR);
let mut C = vec![];
std::mem::swap(&mut prover.C, &mut C);
let mut V = vec![];
std::mem::swap(&mut prover.V, &mut V);
ArithmeticCircuitWitness::new(ScalarVector::from(aL), ScalarVector::from(aR), C, V)
})
.transpose()?;
Ok((statement, witness))
}
}
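As a usage illustration (a minimal, hypothetical sketch, not code from this PR): the following helper proves knowledge of a value whose square equals a public constant, using only the API above (`prove`, `mul`, `equality`, `constrain_equal_to_zero`). `vector_commitments` and `commitments` are assumed to be prepared by the caller.

use ciphersuite::Ciphersuite;
use generalized_bulletproofs::{PedersenCommitment, PedersenVectorCommitment};
use generalized_bulletproofs_circuit_abstraction::{Circuit, LinComb};

// Hypothetical example: constrain x * x == square for a witnessed x
fn square_circuit<C: Ciphersuite>(
  vector_commitments: Vec<PedersenVectorCommitment<C>>,
  commitments: Vec<PedersenCommitment<C>>,
  x: C::F,
  square: C::F,
) -> Circuit<C> {
  let mut circuit = Circuit::<C>::prove(vector_commitments, commitments);
  // Witness the multiplication x * x, leaving its inputs unconstrained for now
  let (l, r, o) = circuit.mul(None, None, Some((x, x)));
  // Constrain both inputs of the gate to be the same value
  circuit.equality(LinComb::from(l), &LinComb::from(r));
  // Constrain the gate's output to the public square: `1 o - square = 0`
  circuit.constrain_equal_to_zero(LinComb::from(o).constant(-square));
  circuit
}

The circuit would then be passed to `statement(generators, commitments)` to obtain the `ArithmeticCircuitStatement` and, when proving, the witness.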

View File

@@ -0,0 +1,37 @@
[package]
name = "ec-divisors"
version = "0.1.0"
description = "A library for calculating elliptic curve divisors"
license = "MIT"
repository = "https://github.com/serai-dex/serai/tree/develop/crypto/evrf/divisors"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = ["ciphersuite", "ff", "group"]
edition = "2021"
rust-version = "1.71"
[package.metadata.docs.rs]
all-features = true
rustdoc-args = ["--cfg", "docsrs"]
[dependencies]
rand_core = { version = "0.6", default-features = false }
zeroize = { version = "^1.5", default-features = false, features = ["std", "zeroize_derive"] }
subtle = { version = "2", default-features = false, features = ["std"] }
ff = { version = "0.13", default-features = false, features = ["std", "bits"] }
group = { version = "0.13", default-features = false }
hex = { version = "0.4", optional = true }
dalek-ff-group = { path = "../../dalek-ff-group", features = ["std"], optional = true }
pasta_curves = { version = "0.5", default-features = false, features = ["bits", "alloc"], optional = true }
[dev-dependencies]
rand_core = { version = "0.6", features = ["getrandom"] }
hex = "0.4"
dalek-ff-group = { path = "../../dalek-ff-group", features = ["std"] }
pasta_curves = { version = "0.5", default-features = false, features = ["bits", "alloc"] }
[features]
ed25519 = ["hex", "dalek-ff-group"]
pasta = ["pasta_curves"]

View File

@@ -0,0 +1,21 @@
MIT License
Copyright (c) 2023-2024 Luke Parker
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

View File

@@ -0,0 +1,4 @@
# Elliptic Curve Divisors
An implementation of a representation for and construction of elliptic curve
divisors, intended for Eagen's [EC IP work](https://eprint.iacr.org/2022/596).

View File

@@ -0,0 +1,576 @@
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
#![doc = include_str!("../README.md")]
#![deny(missing_docs)]
#![allow(non_snake_case)]
use subtle::{Choice, ConstantTimeEq, ConstantTimeGreater, ConditionallySelectable};
use zeroize::{Zeroize, ZeroizeOnDrop};
use group::{
ff::{Field, PrimeField, PrimeFieldBits},
Group,
};
mod poly;
pub use poly::Poly;
#[cfg(test)]
mod tests;
/// A curve usable with this library.
pub trait DivisorCurve: Group + ConstantTimeEq + ConditionallySelectable {
/// An element of the field this curve is defined over.
type FieldElement: Zeroize + PrimeField + ConditionallySelectable;
/// The A in the curve equation y^2 = x^3 + A x + B.
fn a() -> Self::FieldElement;
/// The B in the curve equation y^2 = x^3 + A x + B.
fn b() -> Self::FieldElement;
/// y^2 - x^3 - A x - B
///
/// Section 2 of the security proofs defines this modulus.
///
/// This MUST NOT be overridden.
// TODO: Move to an extension trait
fn divisor_modulus() -> Poly<Self::FieldElement> {
Poly {
// 0 y**1, 1 y*2
y_coefficients: vec![Self::FieldElement::ZERO, Self::FieldElement::ONE],
yx_coefficients: vec![],
x_coefficients: vec![
// - A x
-Self::a(),
// 0 x^2
Self::FieldElement::ZERO,
// - x^3
-Self::FieldElement::ONE,
],
// - B
zero_coefficient: -Self::b(),
}
}
/// Convert a point to its x and y coordinates.
///
/// Returns None if passed the point at infinity.
fn to_xy(point: Self) -> Option<(Self::FieldElement, Self::FieldElement)>;
}
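A note on why this modulus matters: every curve point satisfies $y^2 = x^3 + A x + B$, so reducing modulo $M(x, y) = y^2 - x^3 - A x - B$ rewrites any $y^k$ with $k \geq 2$ in terms of lower powers of $y$. Divisors reduced this way have $y$-degree at most $1$, which is exactly the shape the `trim` closure in `new_divisor` below debug-asserts (`yx_coefficients.len() <= 1`).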
/// Calculate the slope and intercept between two points.
///
/// This function panics when `a @ infinity`, `b @ infinity`, `a == b`, or when `a == -b`.
pub(crate) fn slope_intercept<C: DivisorCurve>(a: C, b: C) -> (C::FieldElement, C::FieldElement) {
let (ax, ay) = C::to_xy(a).unwrap();
debug_assert_eq!(C::divisor_modulus().eval(ax, ay), C::FieldElement::ZERO);
let (bx, by) = C::to_xy(b).unwrap();
debug_assert_eq!(C::divisor_modulus().eval(bx, by), C::FieldElement::ZERO);
let slope = (by - ay) *
Option::<C::FieldElement>::from((bx - ax).invert())
.expect("trying to get slope/intercept of points sharing an x coordinate");
let intercept = by - (slope * bx);
debug_assert!(bool::from((ay - (slope * ax) - intercept).is_zero()));
debug_assert!(bool::from((by - (slope * bx) - intercept).is_zero()));
(slope, intercept)
}
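In the usual notation, for $a = (x_a, y_a)$ and $b = (x_b, y_b)$:

$$\lambda = \frac{y_b - y_a}{x_b - x_a}, \qquad \mu = y_b - \lambda x_b$$

yielding the interpolating line $L(x, y) = y - \lambda x - \mu$ constructed by `line` below.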
// The line interpolating two points.
fn line<C: DivisorCurve>(a: C, b: C) -> Poly<C::FieldElement> {
#[derive(Clone, Copy)]
struct LinesRes<F: ConditionallySelectable> {
y_coefficient: F,
x_coefficient: F,
zero_coefficient: F,
}
impl<F: ConditionallySelectable> ConditionallySelectable for LinesRes<F> {
fn conditional_select(a: &Self, b: &Self, choice: Choice) -> Self {
Self {
y_coefficient: <_>::conditional_select(&a.y_coefficient, &b.y_coefficient, choice),
x_coefficient: <_>::conditional_select(&a.x_coefficient, &b.x_coefficient, choice),
zero_coefficient: <_>::conditional_select(&a.zero_coefficient, &b.zero_coefficient, choice),
}
}
}
let a_is_identity = a.is_identity();
let b_is_identity = b.is_identity();
// If they're both the point at infinity, we simply set the line to one
let both_are_identity = a_is_identity & b_is_identity;
let if_both_are_identity = LinesRes {
y_coefficient: C::FieldElement::ZERO,
x_coefficient: C::FieldElement::ZERO,
zero_coefficient: C::FieldElement::ONE,
};
// If either point is the point at infinity, or these are additive inverses, the line is
// `1 * x - x`. The first `x` is a term in the polynomial; the second `x` is the `x` coordinate of these
// points (of which there is one, as the second point is either at infinity or has a matching `x`
// coordinate).
let one_is_identity = a_is_identity | b_is_identity;
let additive_inverses = a.ct_eq(&-b);
let one_is_identity_or_additive_inverses = one_is_identity | additive_inverses;
let if_one_is_identity_or_additive_inverses = {
// If both are identity, set `a` to the generator so we can safely evaluate the following
// (which we won't select at the end of this function)
let a = <_>::conditional_select(&a, &C::generator(), both_are_identity);
// If `a` is identity, this selects `b`. If `a` isn't identity, this selects `a`
let non_identity = <_>::conditional_select(&a, &b, a.is_identity());
let (x, _) = C::to_xy(non_identity).unwrap();
LinesRes {
y_coefficient: C::FieldElement::ZERO,
x_coefficient: C::FieldElement::ONE,
zero_coefficient: -x,
}
};
// The following calculation assumes neither point is the point at infinity
// If either are, we use a prior result
// To ensure we can calculate a result here, set any points at infinity to the generator
let a = <_>::conditional_select(&a, &C::generator(), a_is_identity);
let b = <_>::conditional_select(&b, &C::generator(), b_is_identity);
// It also assumes a, b aren't additive inverses which is also covered by a prior result
let b = <_>::conditional_select(&b, &a.double(), additive_inverses);
// If the points are equal, we use the line interpolating the sum of these points with the point
// at infinity
let b = <_>::conditional_select(&b, &-a.double(), a.ct_eq(&b));
let (slope, intercept) = slope_intercept::<C>(a, b);
// Section 4 of the proofs explicitly state the line `L = y - lambda * x - mu`
// y - (slope * x) - intercept
let mut res = LinesRes {
y_coefficient: C::FieldElement::ONE,
x_coefficient: -slope,
zero_coefficient: -intercept,
};
res = <_>::conditional_select(
&res,
&if_one_is_identity_or_additive_inverses,
one_is_identity_or_additive_inverses,
);
res = <_>::conditional_select(&res, &if_both_are_identity, both_are_identity);
Poly {
y_coefficients: vec![res.y_coefficient],
yx_coefficients: vec![],
x_coefficients: vec![res.x_coefficient],
zero_coefficient: res.zero_coefficient,
}
}
/// Create a divisor interpolating the following points.
///
/// Returns None if:
/// - No points were passed in
/// - The points don't sum to the point at infinity
/// - A passed in point was the point at infinity
///
/// If the arguments were valid, this function executes in an amount of time constant to the amount
/// of points.
#[allow(clippy::new_ret_no_self)]
pub fn new_divisor<C: DivisorCurve>(points: &[C]) -> Option<Poly<C::FieldElement>> {
// No points were passed in, the single point passed in is the point at infinity, or the single
// point isn't infinity and accordingly doesn't sum to infinity. All three cause us to return None
// Checks a bit other than the first bit is set, meaning this is >= 2
let mut invalid_args = (points.len() & (!1)).ct_eq(&0);
// The points don't sum to the point at infinity
invalid_args |= !points.iter().sum::<C>().is_identity();
// A point was the point at identity
for point in points {
invalid_args |= point.is_identity();
}
if bool::from(invalid_args) {
None?;
}
let points_len = points.len();
// Create the initial set of divisors
let mut divs = vec![];
let mut iter = points.iter().copied();
while let Some(a) = iter.next() {
let b = iter.next();
// Draw the line between those points
// These unwraps are branching on the length of the iterator, not violating the constant-time
// priorities desired
divs.push((2, a + b.unwrap_or(C::identity()), line::<C>(a, b.unwrap_or(-a))));
}
let modulus = C::divisor_modulus();
// Our Poly algorithm is leaky and will create an excessive number of y x**j and x**j
// coefficients which are zero yet, as our implementation is constant time, still come with
// an immense performance cost. This code truncates the coefficients we know are zero.
let trim = |divisor: &mut Poly<_>, points_len: usize| {
// We should only be trimming divisors reduced by the modulus
debug_assert!(divisor.yx_coefficients.len() <= 1);
if divisor.yx_coefficients.len() == 1 {
let truncate_to = ((points_len + 1) / 2).saturating_sub(2);
#[cfg(debug_assertions)]
for p in truncate_to .. divisor.yx_coefficients[0].len() {
debug_assert_eq!(divisor.yx_coefficients[0][p], <C::FieldElement as Field>::ZERO);
}
divisor.yx_coefficients[0].truncate(truncate_to);
}
{
let truncate_to = points_len / 2;
#[cfg(debug_assertions)]
for p in truncate_to .. divisor.x_coefficients.len() {
debug_assert_eq!(divisor.x_coefficients[p], <C::FieldElement as Field>::ZERO);
}
divisor.x_coefficients.truncate(truncate_to);
}
};
// Pair them off until only one remains
while divs.len() > 1 {
let mut next_divs = vec![];
// If there's an odd amount of divisors, carry the odd one out to the next iteration
if (divs.len() % 2) == 1 {
next_divs.push(divs.pop().unwrap());
}
while let Some((a_points, a, a_div)) = divs.pop() {
let (b_points, b, b_div) = divs.pop().unwrap();
let points = a_points + b_points;
// Merge the two divisors
let numerator = a_div.mul_mod(&b_div, &modulus).mul_mod(&line::<C>(a, b), &modulus);
let denominator = line::<C>(a, -a).mul_mod(&line::<C>(b, -b), &modulus);
let (mut q, r) = numerator.div_rem(&denominator);
debug_assert_eq!(r, Poly::zero());
trim(&mut q, 1 + points);
next_divs.push((points, a + b, q));
}
divs = next_divs;
}
// Return the unified divisor
let mut divisor = divs.remove(0).2;
trim(&mut divisor, points_len);
Some(divisor)
}
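As a usage illustration, here's a hedged, test-style sketch (not code from this PR) under the `pasta` feature, where `Ep` implements `DivisorCurve`; it assumes `Poly::eval`, used by the debug assertions in this file, is publicly exposed:

use rand_core::OsRng;
use group::{ff::Field, Group};
use pasta_curves::Ep;
use ec_divisors::{DivisorCurve, new_divisor};

#[test]
fn interpolate_three_points() {
  let a = Ep::random(&mut OsRng);
  let b = Ep::random(&mut OsRng);
  // The interpolated points must sum to the point at infinity
  let points = [a, b, -(a + b)];
  let divisor = new_divisor::<Ep>(&points).unwrap();
  // The divisor should vanish at each interpolated point
  let (x, y) = Ep::to_xy(a).unwrap();
  assert_eq!(divisor.eval(x, y), <Ep as DivisorCurve>::FieldElement::ZERO);
}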
/// The decomposition of a scalar.
///
/// The decomposition ($d$) of a scalar ($s$) has the following two properties:
///
/// - $\sum^{\mathsf{NUM_BITS} - 1}_{i=0} d_i * 2^i = s$
/// - $\sum^{\mathsf{NUM_BITS} - 1}_{i=0} d_i = \mathsf{NUM_BITS}$
#[derive(Clone, Zeroize, ZeroizeOnDrop)]
pub struct ScalarDecomposition<F: Zeroize + PrimeFieldBits> {
scalar: F,
decomposition: Vec<u64>,
}
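A small worked example of these two properties (a hypothetical field with $\mathsf{NUM\_BITS} = 4$ and $s = 5$): the little-endian bits give $d = [1, 0, 1, 0]$, whose coefficients sum to $2$ rather than $4$. Applying the adjustment `new` performs below (decrement the highest non-zero coefficient, add $2$ to the next less significant one) twice:

$$[1, 0, 1, 0] \rightarrow [1, 2, 0, 0] \rightarrow [3, 1, 0, 0]$$

Both properties now hold: $3 \cdot 2^0 + 1 \cdot 2^1 = 5$ and $3 + 1 = 4 = \mathsf{NUM\_BITS}$.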
impl<F: Zeroize + PrimeFieldBits> ScalarDecomposition<F> {
/// Decompose a scalar.
pub fn new(scalar: F) -> Self {
/*
We need the sum of the coefficients to equal F::NUM_BITS. The sum of the scalar's bits will be
less than F::NUM_BITS. Accordingly, we need to increment the sum of the coefficients without
incrementing the scalar represented. We do this by finding the highest non-0 coefficient,
decrementing it, and increasing the immediately less significant coefficient by 2. This
increases the sum of the coefficients by 1 (-1+2=1).
*/
let num_bits = u64::from(F::NUM_BITS);
// Obtain the bits of the scalar
let num_bits_usize = usize::try_from(num_bits).unwrap();
let mut decomposition = vec![0; num_bits_usize];
for (i, bit) in scalar.to_le_bits().into_iter().take(num_bits_usize).enumerate() {
let bit = u64::from(u8::from(bit));
decomposition[i] = bit;
}
// The following algorithm only works if the value of the scalar exceeds num_bits
// If it doesn't, we increase it by the modulus such that it does exceed num_bits
{
let mut less_than_num_bits = Choice::from(0);
for i in 0 .. num_bits {
less_than_num_bits |= scalar.ct_eq(&F::from(i));
}
let mut decomposition_of_modulus = vec![0; num_bits_usize];
// Decompose negative one
for (i, bit) in (-F::ONE).to_le_bits().into_iter().take(num_bits_usize).enumerate() {
let bit = u64::from(u8::from(bit));
decomposition_of_modulus[i] = bit;
}
// Increment it by one
decomposition_of_modulus[0] += 1;
// Add the decomposition onto the decomposition of the modulus
for i in 0 .. num_bits_usize {
let new_decomposition = <_>::conditional_select(
&decomposition[i],
&(decomposition[i] + decomposition_of_modulus[i]),
less_than_num_bits,
);
decomposition[i] = new_decomposition;
}
}
// Calculate the sum of the coefficients
let mut sum_of_coefficients: u64 = 0;
for decomposition in &decomposition {
sum_of_coefficients += *decomposition;
}
/*
Now, because we added a log2(k)-bit number to a k-bit number, we may have our sum of
coefficients be *too high*. We attempt to reduce the sum of the coefficients accordingly.
This algorithm is guaranteed to complete as expected. Take the sequence `222`. `222` becomes
`032` becomes `013`. Even if the next coefficient in the sequence is `2`, the third
coefficient will be reduced once and the next coefficient (`2`, increased to `3`) will only
be eligible for reduction once. This demonstrates, even for a worst case of log2(k) `2`s
followed by `1`s (as possible if the modulus is a Mersenne prime), the log2(k) `2`s can be
reduced as necessary so long as there is a single coefficient after (requiring the entire
sequence be at least of length log2(k) + 1). For a 2-bit number, log2(k) + 1 == 2, so this
holds for any odd prime field.
To fully type out the demonstration for the Mersenne prime 3, with scalar to encode 1 (the
highest value less than the number of bits):
10 - Little-endian bits of 1
21 - Little-endian bits of 1, plus the modulus
02 - After one reduction, where the sum of the coefficients does in fact equal 2 (the target)
*/
{
let mut log2_num_bits = 0;
while (1 << log2_num_bits) < num_bits {
log2_num_bits += 1;
}
for _ in 0 .. log2_num_bits {
// If the sum of coefficients is the amount of bits, we're done
let mut done = sum_of_coefficients.ct_eq(&num_bits);
for i in 0 .. (num_bits_usize - 1) {
let should_act = (!done) & decomposition[i].ct_gt(&1);
// Subtract 2 from this coefficient
let amount_to_sub = <_>::conditional_select(&0, &2, should_act);
decomposition[i] -= amount_to_sub;
// Add 1 to the next coefficient
let amount_to_add = <_>::conditional_select(&0, &1, should_act);
decomposition[i + 1] += amount_to_add;
// Also update the sum of coefficients
sum_of_coefficients -= <_>::conditional_select(&0, &1, should_act);
// If we updated the coefficients this loop iter, we're done for this loop iter
done |= should_act;
}
}
}
for _ in 0 .. num_bits {
// If the sum of coefficients is the amount of bits, we're done
let mut done = sum_of_coefficients.ct_eq(&num_bits);
// Find the highest coefficient currently non-zero
for i in (1 .. decomposition.len()).rev() {
// If this is non-zero, we should decrement this coefficient if we haven't already
// decremented a coefficient this round
let is_non_zero = !(0.ct_eq(&decomposition[i]));
let should_act = (!done) & is_non_zero;
// Update this coefficient and the prior coefficient
let amount_to_sub = <_>::conditional_select(&0, &1, should_act);
decomposition[i] -= amount_to_sub;
let amount_to_add = <_>::conditional_select(&0, &2, should_act);
// i must be at least 1, so i - 1 will be at least 0 (meaning it's safe to index with)
decomposition[i - 1] += amount_to_add;
// Also update the sum of coefficients
sum_of_coefficients += <_>::conditional_select(&0, &1, should_act);
// If we updated the coefficients this loop iter, we're done for this loop iter
done |= should_act;
}
}
debug_assert!(bool::from(decomposition.iter().sum::<u64>().ct_eq(&num_bits)));
ScalarDecomposition { scalar, decomposition }
}
/// The decomposition of the scalar.
pub fn decomposition(&self) -> &[u64] {
&self.decomposition
}
/// A divisor to prove a scalar multiplication.
///
/// The divisor will interpolate $-(s \cdot G)$ with $d_i$ instances of $2^i \cdot G$.
///
/// This function executes in constant time with regards to the scalar.
///
/// This function MAY panic if this scalar is zero.
pub fn scalar_mul_divisor<C: Zeroize + DivisorCurve<Scalar = F>>(
&self,
mut generator: C,
) -> Poly<C::FieldElement> {
// 1 is used for the resulting point, NUM_BITS is used for the decomposition, and then we store
// one additional index in a usize for the points we shouldn't write at all (hence the +2)
let _ = usize::try_from(<C::Scalar as PrimeField>::NUM_BITS + 2)
.expect("NUM_BITS + 2 didn't fit in usize");
let mut divisor_points =
vec![C::identity(); (<C::Scalar as PrimeField>::NUM_BITS + 1) as usize];
// Write the inverse of the resulting point
divisor_points[0] = -generator * self.scalar;
// Write the decomposition
let mut write_to: u32 = 1;
for coefficient in &self.decomposition {
let mut coefficient = *coefficient;
// Iterate over the maximum amount of iters for this value to be constant time regardless of
// any branch prediction algorithms
for _ in 0 .. <C::Scalar as PrimeField>::NUM_BITS {
// Write the generator to the slot we're supposed to
/*
Without this loop, we'd increment this dependent on the distribution within the
decomposition. If the distribution is bottom-heavy, we won't access the tail of
`divisor_points` for a while, risking it being ejected out of the cache (causing a cache
miss which may not occur with a top-heavy distribution which quickly moves to the tail).
This is O(log2(NUM_BITS) ** 3) though, as this is the third loop, which is horrific.
*/
for i in 1 ..= <C::Scalar as PrimeField>::NUM_BITS {
divisor_points[i as usize] =
<_>::conditional_select(&divisor_points[i as usize], &generator, i.ct_eq(&write_to));
}
// If the coefficient isn't zero, increment write_to (so we don't overwrite this generator
// when it should be there)
let coefficient_not_zero = !coefficient.ct_eq(&0);
write_to = <_>::conditional_select(&write_to, &(write_to + 1), coefficient_not_zero);
// Subtract one from the coefficient, if it's not zero and won't underflow
coefficient =
<_>::conditional_select(&coefficient, &coefficient.wrapping_sub(1), coefficient_not_zero);
}
generator = generator.double();
}
// Create a divisor out of all points except the last point which is solely scratch
let res = new_divisor(&divisor_points).unwrap();
divisor_points.zeroize();
res
}
}
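Tying the pieces together, a minimal sketch (not from this PR) of producing such a divisor, assuming the `ed25519` feature (the `dalek-ff-group` types satisfy the `Zeroize` bounds required here):

use group::Group;
use dalek_ff_group::{Scalar, EdwardsPoint};
use ec_divisors::{DivisorCurve, ScalarDecomposition, Poly};

// Build the divisor interpolating -(scalar * G) with d_i copies of each 2^i * G.
// `scalar` must be non-zero, per scalar_mul_divisor's documentation.
fn divisor_for(scalar: Scalar) -> Poly<<EdwardsPoint as DivisorCurve>::FieldElement> {
  let decomposition = ScalarDecomposition::new(scalar);
  decomposition.scalar_mul_divisor(EdwardsPoint::generator())
}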
#[cfg(any(test, feature = "pasta"))]
mod pasta {
use group::{ff::Field, Curve};
use pasta_curves::{
arithmetic::{Coordinates, CurveAffine},
Ep, Fp, Eq, Fq,
};
use crate::DivisorCurve;
impl DivisorCurve for Ep {
type FieldElement = Fp;
fn a() -> Self::FieldElement {
Self::FieldElement::ZERO
}
fn b() -> Self::FieldElement {
Self::FieldElement::from(5u64)
}
fn to_xy(point: Self) -> Option<(Self::FieldElement, Self::FieldElement)> {
Option::<Coordinates<_>>::from(point.to_affine().coordinates())
.map(|coords| (*coords.x(), *coords.y()))
}
}
impl DivisorCurve for Eq {
type FieldElement = Fq;
fn a() -> Self::FieldElement {
Self::FieldElement::ZERO
}
fn b() -> Self::FieldElement {
Self::FieldElement::from(5u64)
}
fn to_xy(point: Self) -> Option<(Self::FieldElement, Self::FieldElement)> {
Option::<Coordinates<_>>::from(point.to_affine().coordinates())
.map(|coords| (*coords.x(), *coords.y()))
}
}
}
#[cfg(any(test, feature = "ed25519"))]
mod ed25519 {
use group::{
ff::{Field, PrimeField},
Group, GroupEncoding,
};
use dalek_ff_group::{FieldElement, EdwardsPoint};
impl crate::DivisorCurve for EdwardsPoint {
type FieldElement = FieldElement;
// Wei25519 a/b
// https://www.ietf.org/archive/id/draft-ietf-lwig-curve-representations-02.pdf E.3
fn a() -> Self::FieldElement {
let mut be_bytes =
hex::decode("2aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa984914a144").unwrap();
be_bytes.reverse();
let le_bytes = be_bytes;
Self::FieldElement::from_repr(le_bytes.try_into().unwrap()).unwrap()
}
fn b() -> Self::FieldElement {
let mut be_bytes =
hex::decode("7b425ed097b425ed097b425ed097b425ed097b425ed097b4260b5e9c7710c864").unwrap();
be_bytes.reverse();
let le_bytes = be_bytes;
Self::FieldElement::from_repr(le_bytes.try_into().unwrap()).unwrap()
}
// https://www.ietf.org/archive/id/draft-ietf-lwig-curve-representations-02.pdf E.2
fn to_xy(point: Self) -> Option<(Self::FieldElement, Self::FieldElement)> {
if bool::from(point.is_identity()) {
None?;
}
// Extract the y coordinate from the compressed point
let mut edwards_y = point.to_bytes();
let x_is_odd = edwards_y[31] >> 7;
edwards_y[31] &= (1 << 7) - 1;
let edwards_y = Self::FieldElement::from_repr(edwards_y).unwrap();
// Recover the x coordinate
let edwards_y_sq = edwards_y * edwards_y;
let D = -Self::FieldElement::from(121665u64) *
Self::FieldElement::from(121666u64).invert().unwrap();
let mut edwards_x = ((edwards_y_sq - Self::FieldElement::ONE) *
((D * edwards_y_sq) + Self::FieldElement::ONE).invert().unwrap())
.sqrt()
.unwrap();
if u8::from(bool::from(edwards_x.is_odd())) != x_is_odd {
edwards_x = -edwards_x;
}
// Calculate the x and y coordinates for Wei25519
let edwards_y_plus_one = Self::FieldElement::ONE + edwards_y;
let one_minus_edwards_y = Self::FieldElement::ONE - edwards_y;
let wei_x = (edwards_y_plus_one * one_minus_edwards_y.invert().unwrap()) +
(Self::FieldElement::from(486662u64) * Self::FieldElement::from(3u64).invert().unwrap());
let c =
(-(Self::FieldElement::from(486662u64) + Self::FieldElement::from(2u64))).sqrt().unwrap();
let wei_y = c * edwards_y_plus_one * (one_minus_edwards_y * edwards_x).invert().unwrap();
Some((wei_x, wei_y))
}
}
}
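For reference, the map implemented by `to_xy` above (per the cited draft), with $A = 486662$ and the recovered Edwards coordinates $(x_e, y_e)$:

$$x_w = \frac{1 + y_e}{1 - y_e} + \frac{A}{3}, \qquad y_w = \frac{c\,(1 + y_e)}{(1 - y_e)\,x_e}, \qquad c = \sqrt{-(A + 2)}$$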

View File

@@ -0,0 +1,743 @@
use core::ops::{Add, Neg, Sub, Mul, Rem};
use subtle::{Choice, ConstantTimeEq, ConstantTimeGreater, ConditionallySelectable};
use zeroize::{Zeroize, ZeroizeOnDrop};
use group::ff::PrimeField;
#[derive(Clone, Copy, PartialEq, Debug)]
struct CoefficientIndex {
y_pow: u64,
x_pow: u64,
}
impl ConditionallySelectable for CoefficientIndex {
fn conditional_select(a: &Self, b: &Self, choice: Choice) -> Self {
Self {
y_pow: <_>::conditional_select(&a.y_pow, &b.y_pow, choice),
x_pow: <_>::conditional_select(&a.x_pow, &b.x_pow, choice),
}
}
}
impl ConstantTimeEq for CoefficientIndex {
fn ct_eq(&self, other: &Self) -> Choice {
self.y_pow.ct_eq(&other.y_pow) & self.x_pow.ct_eq(&other.x_pow)
}
}
impl ConstantTimeGreater for CoefficientIndex {
fn ct_gt(&self, other: &Self) -> Choice {
self.y_pow.ct_gt(&other.y_pow) |
(self.y_pow.ct_eq(&other.y_pow) & self.x_pow.ct_gt(&other.x_pow))
}
}
/// A structure representing a Polynomial with x^i, y^i, and y^i * x^j terms.
#[derive(Clone, Debug, Zeroize, ZeroizeOnDrop)]
pub struct Poly<F: From<u64> + Zeroize + PrimeField> {
/// c\[i] * y^(i + 1)
pub y_coefficients: Vec<F>,
/// c\[i]\[j] * y^(i + 1) x^(j + 1)
pub yx_coefficients: Vec<Vec<F>>,
/// c\[i] * x^(i + 1)
pub x_coefficients: Vec<F>,
/// Coefficient for x^0, y^0, and x^0 y^0 (the coefficient for 1)
pub zero_coefficient: F,
}
impl<F: From<u64> + Zeroize + PrimeField> PartialEq for Poly<F> {
// This is not constant time and is not meant to be
fn eq(&self, b: &Poly<F>) -> bool {
{
let mutual_y_coefficients = self.y_coefficients.len().min(b.y_coefficients.len());
if self.y_coefficients[.. mutual_y_coefficients] != b.y_coefficients[.. mutual_y_coefficients]
{
return false;
}
for coeff in &self.y_coefficients[mutual_y_coefficients ..] {
if *coeff != F::ZERO {
return false;
}
}
for coeff in &b.y_coefficients[mutual_y_coefficients ..] {
if *coeff != F::ZERO {
return false;
}
}
}
{
for (i, yx_coeffs) in self.yx_coefficients.iter().enumerate() {
for (j, coeff) in yx_coeffs.iter().enumerate() {
if coeff != b.yx_coefficients.get(i).unwrap_or(&vec![]).get(j).unwrap_or(&F::ZERO) {
return false;
}
}
}
// Run from the other perspective in case other is longer than self
for (i, yx_coeffs) in b.yx_coefficients.iter().enumerate() {
for (j, coeff) in yx_coeffs.iter().enumerate() {
if coeff != self.yx_coefficients.get(i).unwrap_or(&vec![]).get(j).unwrap_or(&F::ZERO) {
return false;
}
}
}
}
{
let mutual_x_coefficients = self.x_coefficients.len().min(b.x_coefficients.len());
if self.x_coefficients[.. mutual_x_coefficients] != b.x_coefficients[.. mutual_x_coefficients]
{
return false;
}
for coeff in &self.x_coefficients[mutual_x_coefficients ..] {
if *coeff != F::ZERO {
return false;
}
}
for coeff in &b.x_coefficients[mutual_x_coefficients ..] {
if *coeff != F::ZERO {
return false;
}
}
}
self.zero_coefficient == b.zero_coefficient
}
}
impl<F: From<u64> + Zeroize + PrimeField> Poly<F> {
/// A polynomial for zero.
pub(crate) fn zero() -> Self {
Poly {
y_coefficients: vec![],
yx_coefficients: vec![],
x_coefficients: vec![],
zero_coefficient: F::ZERO,
}
}
}
impl<F: From<u64> + Zeroize + PrimeField> Add<&Self> for Poly<F> {
type Output = Self;
fn add(mut self, other: &Self) -> Self {
// Expand to the needed size
while self.y_coefficients.len() < other.y_coefficients.len() {
self.y_coefficients.push(F::ZERO);
}
while self.yx_coefficients.len() < other.yx_coefficients.len() {
self.yx_coefficients.push(vec![]);
}
for i in 0 .. other.yx_coefficients.len() {
while self.yx_coefficients[i].len() < other.yx_coefficients[i].len() {
self.yx_coefficients[i].push(F::ZERO);
}
}
while self.x_coefficients.len() < other.x_coefficients.len() {
self.x_coefficients.push(F::ZERO);
}
// Perform the addition
for (i, coeff) in other.y_coefficients.iter().enumerate() {
self.y_coefficients[i] += coeff;
}
for (i, coeffs) in other.yx_coefficients.iter().enumerate() {
for (j, coeff) in coeffs.iter().enumerate() {
self.yx_coefficients[i][j] += coeff;
}
}
for (i, coeff) in other.x_coefficients.iter().enumerate() {
self.x_coefficients[i] += coeff;
}
self.zero_coefficient += other.zero_coefficient;
self
}
}
impl<F: From<u64> + Zeroize + PrimeField> Neg for Poly<F> {
type Output = Self;
fn neg(mut self) -> Self {
for y_coeff in self.y_coefficients.iter_mut() {
*y_coeff = -*y_coeff;
}
for yx_coeffs in self.yx_coefficients.iter_mut() {
for yx_coeff in yx_coeffs.iter_mut() {
*yx_coeff = -*yx_coeff;
}
}
for x_coeff in self.x_coefficients.iter_mut() {
*x_coeff = -*x_coeff;
}
self.zero_coefficient = -self.zero_coefficient;
self
}
}
impl<F: From<u64> + Zeroize + PrimeField> Sub for Poly<F> {
type Output = Self;
fn sub(self, other: Self) -> Self {
self + &-other
}
}
impl<F: From<u64> + Zeroize + PrimeField> Mul<F> for Poly<F> {
type Output = Self;
fn mul(mut self, scalar: F) -> Self {
for y_coeff in self.y_coefficients.iter_mut() {
*y_coeff *= scalar;
}
for coeffs in self.yx_coefficients.iter_mut() {
for coeff in coeffs.iter_mut() {
*coeff *= scalar;
}
}
for x_coeff in self.x_coefficients.iter_mut() {
*x_coeff *= scalar;
}
self.zero_coefficient *= scalar;
self
}
}
impl<F: From<u64> + Zeroize + PrimeField> Poly<F> {
#[must_use]
fn shift_by_x(mut self, power_of_x: usize) -> Self {
if power_of_x == 0 {
return self;
}
// Shift up every x coefficient
for _ in 0 .. power_of_x {
self.x_coefficients.insert(0, F::ZERO);
for yx_coeffs in &mut self.yx_coefficients {
yx_coeffs.insert(0, F::ZERO);
}
}
// Move the zero coefficient
self.x_coefficients[power_of_x - 1] = self.zero_coefficient;
self.zero_coefficient = F::ZERO;
// Move the y coefficients
// Start by creating yx coefficients with the necessary powers of x
let mut yx_coefficients_to_push = vec![];
while yx_coefficients_to_push.len() < power_of_x {
yx_coefficients_to_push.push(F::ZERO);
}
// Now, ensure the yx coefficients has the slots for the y coefficients we're moving
while self.yx_coefficients.len() < self.y_coefficients.len() {
self.yx_coefficients.push(yx_coefficients_to_push.clone());
}
// Perform the move
for (i, y_coeff) in self.y_coefficients.drain(..).enumerate() {
self.yx_coefficients[i][power_of_x - 1] = y_coeff;
}
self
}
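// Illustrative example (not from the source): shifting 7 + 3y by one power of x yields
// 7x + 3yx, i.e. the zero coefficient moves into x_coefficients[0] and each y
// coefficient moves into the matching row of yx_coefficients.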
#[must_use]
fn shift_by_y(mut self, power_of_y: usize) -> Self {
if power_of_y == 0 {
return self;
}
// Shift up every y coefficient
for _ in 0 .. power_of_y {
self.y_coefficients.insert(0, F::ZERO);
self.yx_coefficients.insert(0, vec![]);
}
// Move the zero coefficient
self.y_coefficients[power_of_y - 1] = self.zero_coefficient;
self.zero_coefficient = F::ZERO;
// Move the x coefficients
std::mem::swap(&mut self.yx_coefficients[power_of_y - 1], &mut self.x_coefficients);
self.x_coefficients = vec![];
self
}
}
impl<F: From<u64> + Zeroize + PrimeField> Mul<&Poly<F>> for Poly<F> {
type Output = Self;
fn mul(self, other: &Self) -> Self {
let mut res = self.clone() * other.zero_coefficient;
for (i, y_coeff) in other.y_coefficients.iter().enumerate() {
let scaled = self.clone() * *y_coeff;
res = res + &scaled.shift_by_y(i + 1);
}
for (y_i, yx_coeffs) in other.yx_coefficients.iter().enumerate() {
for (x_i, yx_coeff) in yx_coeffs.iter().enumerate() {
let scaled = self.clone() * *yx_coeff;
res = res + &scaled.shift_by_y(y_i + 1).shift_by_x(x_i + 1);
}
}
for (i, x_coeff) in other.x_coefficients.iter().enumerate() {
let scaled = self.clone() * *x_coeff;
res = res + &scaled.shift_by_x(i + 1);
}
res
}
}
impl<F: From<u64> + Zeroize + PrimeField> Poly<F> {
// The leading y coefficient and associated x coefficient.
fn leading_coefficient(&self) -> (usize, usize) {
if self.y_coefficients.len() > self.yx_coefficients.len() {
(self.y_coefficients.len(), 0)
} else if !self.yx_coefficients.is_empty() {
(self.yx_coefficients.len(), self.yx_coefficients.last().unwrap().len())
} else {
(0, self.x_coefficients.len())
}
}
/// Returns the highest non-zero coefficient greater than or equal to the specified
/// coefficient.
///
/// If no such non-zero coefficient exists, this will return (0, 0).
fn greater_than_or_equal_coefficient(
&self,
greater_than_or_equal: &CoefficientIndex,
) -> CoefficientIndex {
let mut leading_coefficient = CoefficientIndex { y_pow: 0, x_pow: 0 };
for (y_pow_sub_one, coeff) in self.y_coefficients.iter().enumerate() {
let y_pow = u64::try_from(y_pow_sub_one + 1).unwrap();
let coeff_is_non_zero = !coeff.is_zero();
let potential = CoefficientIndex { y_pow, x_pow: 0 };
leading_coefficient = <_>::conditional_select(
&leading_coefficient,
&potential,
coeff_is_non_zero &
potential.ct_gt(&leading_coefficient) &
(potential.ct_gt(greater_than_or_equal) | potential.ct_eq(greater_than_or_equal)),
);
}
for (y_pow_sub_one, yx_coefficients) in self.yx_coefficients.iter().enumerate() {
let y_pow = u64::try_from(y_pow_sub_one + 1).unwrap();
for (x_pow_sub_one, coeff) in yx_coefficients.iter().enumerate() {
let x_pow = u64::try_from(x_pow_sub_one + 1).unwrap();
let coeff_is_non_zero = !coeff.is_zero();
let potential = CoefficientIndex { y_pow, x_pow };
leading_coefficient = <_>::conditional_select(
&leading_coefficient,
&potential,
coeff_is_non_zero &
potential.ct_gt(&leading_coefficient) &
(potential.ct_gt(greater_than_or_equal) | potential.ct_eq(greater_than_or_equal)),
);
}
}
for (x_pow_sub_one, coeff) in self.x_coefficients.iter().enumerate() {
let x_pow = u64::try_from(x_pow_sub_one + 1).unwrap();
let coeff_is_non_zero = !coeff.is_zero();
let potential = CoefficientIndex { y_pow: 0, x_pow };
leading_coefficient = <_>::conditional_select(
&leading_coefficient,
&potential,
coeff_is_non_zero &
potential.ct_gt(&leading_coefficient) &
(potential.ct_gt(greater_than_or_equal) | potential.ct_eq(greater_than_or_equal)),
);
}
leading_coefficient
}
/// Perform multiplication mod `modulus`.
#[must_use]
pub(crate) fn mul_mod(self, other: &Self, modulus: &Self) -> Self {
(self * other) % modulus
}
/// Perform division, returning the result and remainder.
///
/// This function is constant time to the structure of the numerator and denominator. The actual
/// value of the coefficients will not introduce timing differences.
///
/// Panics upon division by a polynomial where all coefficients are zero.
#[must_use]
pub(crate) fn div_rem(self, denominator: &Self) -> (Self, Self) {
// These functions have undefined behavior if this isn't a valid index for this poly
fn ct_get<F: From<u64> + Zeroize + PrimeField>(poly: &Poly<F>, index: CoefficientIndex) -> F {
let mut res = poly.zero_coefficient;
for (y_pow_sub_one, coeff) in poly.y_coefficients.iter().enumerate() {
res = <_>::conditional_select(
&res,
coeff,
index
.ct_eq(&CoefficientIndex { y_pow: (y_pow_sub_one + 1).try_into().unwrap(), x_pow: 0 }),
);
}
for (y_pow_sub_one, coeffs) in poly.yx_coefficients.iter().enumerate() {
for (x_pow_sub_one, coeff) in coeffs.iter().enumerate() {
res = <_>::conditional_select(
&res,
coeff,
index.ct_eq(&CoefficientIndex {
y_pow: (y_pow_sub_one + 1).try_into().unwrap(),
x_pow: (x_pow_sub_one + 1).try_into().unwrap(),
}),
);
}
}
for (x_pow_sub_one, coeff) in poly.x_coefficients.iter().enumerate() {
res = <_>::conditional_select(
&res,
coeff,
index
.ct_eq(&CoefficientIndex { y_pow: 0, x_pow: (x_pow_sub_one + 1).try_into().unwrap() }),
);
}
res
}
fn ct_set<F: From<u64> + Zeroize + PrimeField>(
poly: &mut Poly<F>,
index: CoefficientIndex,
value: F,
) {
for (y_pow_sub_one, coeff) in poly.y_coefficients.iter_mut().enumerate() {
*coeff = <_>::conditional_select(
coeff,
&value,
index
.ct_eq(&CoefficientIndex { y_pow: (y_pow_sub_one + 1).try_into().unwrap(), x_pow: 0 }),
);
}
for (y_pow_sub_one, coeffs) in poly.yx_coefficients.iter_mut().enumerate() {
for (x_pow_sub_one, coeff) in coeffs.iter_mut().enumerate() {
*coeff = <_>::conditional_select(
coeff,
&value,
index.ct_eq(&CoefficientIndex {
y_pow: (y_pow_sub_one + 1).try_into().unwrap(),
x_pow: (x_pow_sub_one + 1).try_into().unwrap(),
}),
);
}
}
for (x_pow_sub_one, coeff) in poly.x_coefficients.iter_mut().enumerate() {
*coeff = <_>::conditional_select(
coeff,
&value,
index
.ct_eq(&CoefficientIndex { y_pow: 0, x_pow: (x_pow_sub_one + 1).try_into().unwrap() }),
);
}
poly.zero_coefficient = <_>::conditional_select(
&poly.zero_coefficient,
&value,
index.ct_eq(&CoefficientIndex { y_pow: 0, x_pow: 0 }),
);
}
fn conditional_select_poly<F: From<u64> + Zeroize + PrimeField>(
mut a: Poly<F>,
mut b: Poly<F>,
choice: Choice,
) -> Poly<F> {
let pad_to = |a: &mut Poly<F>, b: &Poly<F>| {
while a.x_coefficients.len() < b.x_coefficients.len() {
a.x_coefficients.push(F::ZERO);
}
while a.yx_coefficients.len() < b.yx_coefficients.len() {
a.yx_coefficients.push(vec![]);
}
for (a, b) in a.yx_coefficients.iter_mut().zip(&b.yx_coefficients) {
while a.len() < b.len() {
a.push(F::ZERO);
}
}
while a.y_coefficients.len() < b.y_coefficients.len() {
a.y_coefficients.push(F::ZERO);
}
};
// Pad these to be the same size/layout as each other
pad_to(&mut a, &b);
pad_to(&mut b, &a);
let mut res = Poly::zero();
for (a, b) in a.y_coefficients.iter().zip(&b.y_coefficients) {
res.y_coefficients.push(<_>::conditional_select(a, b, choice));
}
for (a, b) in a.yx_coefficients.iter().zip(&b.yx_coefficients) {
let mut yx_coefficients = Vec::with_capacity(a.len());
for (a, b) in a.iter().zip(b) {
yx_coefficients.push(<_>::conditional_select(a, b, choice))
}
res.yx_coefficients.push(yx_coefficients);
}
for (a, b) in a.x_coefficients.iter().zip(&b.x_coefficients) {
res.x_coefficients.push(<_>::conditional_select(a, b, choice));
}
res.zero_coefficient =
<_>::conditional_select(&a.zero_coefficient, &b.zero_coefficient, choice);
res
}
// The following long division algorithm only works if the denominator actually has a variable
// If the denominator has no variable terms, short-circuit to scalar 'division'
// This is safe as `leading_coefficient` is based on the structure, not the values, of the poly
let denominator_leading_coefficient = denominator.leading_coefficient();
if denominator_leading_coefficient == (0, 0) {
return (self * denominator.zero_coefficient.invert().unwrap(), Poly::zero());
}
// The structure of the quotient, which is the numerator with all coefficients set to 0
let mut quotient_structure = Poly {
y_coefficients: vec![F::ZERO; self.y_coefficients.len()],
yx_coefficients: self.yx_coefficients.clone(),
x_coefficients: vec![F::ZERO; self.x_coefficients.len()],
zero_coefficient: F::ZERO,
};
for coeff in quotient_structure
.yx_coefficients
.iter_mut()
.flat_map(|yx_coefficients| yx_coefficients.iter_mut())
{
*coeff = F::ZERO;
}
// Calculate the amount of iterations we need to perform
let iterations = self.y_coefficients.len() +
self.yx_coefficients.iter().map(|yx_coefficients| yx_coefficients.len()).sum::<usize>() +
self.x_coefficients.len();
// Find the highest non-zero coefficient in the denominator
// This is the coefficient which we actually perform division with
let denominator_dividing_coefficient =
denominator.greater_than_or_equal_coefficient(&CoefficientIndex { y_pow: 0, x_pow: 0 });
let denominator_dividing_coefficient_inv =
ct_get(denominator, denominator_dividing_coefficient).invert().unwrap();
let mut quotient = quotient_structure.clone();
let mut remainder = self.clone();
for _ in 0 .. iterations {
// Find the numerator coefficient we're clearing
// This will be (0, 0) if we aren't clearing a coefficient
let numerator_coefficient =
remainder.greater_than_or_equal_coefficient(&denominator_dividing_coefficient);
// We only apply the effects of this iteration if the numerator's coefficient is actually >=
let meaningful_iteration = numerator_coefficient.ct_gt(&denominator_dividing_coefficient) |
numerator_coefficient.ct_eq(&denominator_dividing_coefficient);
// 1) Find the scalar `q` such that the leading coefficient of `q * denominator` is equal to
// the leading coefficient of self.
let numerator_coefficient_value = ct_get(&remainder, numerator_coefficient);
let q = numerator_coefficient_value * denominator_dividing_coefficient_inv;
// 2) Calculate the full term of the quotient by scaling with the necessary powers of y/x
let proper_powers_of_yx = CoefficientIndex {
y_pow: numerator_coefficient.y_pow.wrapping_sub(denominator_dividing_coefficient.y_pow),
x_pow: numerator_coefficient.x_pow.wrapping_sub(denominator_dividing_coefficient.x_pow),
};
let fallback_powers_of_yx = CoefficientIndex { y_pow: 0, x_pow: 0 };
let mut quotient_term = quotient_structure.clone();
ct_set(
&mut quotient_term,
// If the numerator coefficient isn't >=, proper_powers_of_yx will have garbage in them
<_>::conditional_select(&fallback_powers_of_yx, &proper_powers_of_yx, meaningful_iteration),
q,
);
let quotient_if_meaningful = quotient.clone() + &quotient_term;
quotient = conditional_select_poly(quotient, quotient_if_meaningful, meaningful_iteration);
// 3) Remove what we've divided out from self
let remainder_if_meaningful = remainder.clone() - (quotient_term * denominator);
remainder = conditional_select_poly(remainder, remainder_if_meaningful, meaningful_iteration);
}
quotient = conditional_select_poly(
quotient,
// If the dividing coefficient was for y**0 x**0, we return the poly scaled by its inverse
self.clone() * denominator_dividing_coefficient_inv,
denominator_dividing_coefficient.ct_eq(&CoefficientIndex { y_pow: 0, x_pow: 0 }),
);
remainder = conditional_select_poly(
remainder,
// If the dividing coefficient was for y**0 x**0, we're able to perfectly divide and there's
// no remainder
Poly::zero(),
denominator_dividing_coefficient.ct_eq(&CoefficientIndex { y_pow: 0, x_pow: 0 }),
);
// Clear any junk terms out of the remainder which are less than the denominator
let denominator_leading_coefficient = CoefficientIndex {
y_pow: denominator_leading_coefficient.0.try_into().unwrap(),
x_pow: denominator_leading_coefficient.1.try_into().unwrap(),
};
if denominator_leading_coefficient != (CoefficientIndex { y_pow: 0, x_pow: 0 }) {
while {
let index =
CoefficientIndex { y_pow: remainder.y_coefficients.len().try_into().unwrap(), x_pow: 0 };
bool::from(
index.ct_gt(&denominator_leading_coefficient) |
index.ct_eq(&denominator_leading_coefficient),
)
} {
let popped = remainder.y_coefficients.pop();
debug_assert_eq!(popped, Some(F::ZERO));
}
while {
let index = CoefficientIndex {
y_pow: remainder.yx_coefficients.len().try_into().unwrap(),
x_pow: remainder
.yx_coefficients
.last()
.map(|yx_coefficients| yx_coefficients.len())
.unwrap_or(0)
.try_into()
.unwrap(),
};
bool::from(
index.ct_gt(&denominator_leading_coefficient) |
index.ct_eq(&denominator_leading_coefficient),
)
} {
let popped = remainder.yx_coefficients.last_mut().unwrap().pop();
// This may have been `vec![]`
if let Some(popped) = popped {
debug_assert_eq!(popped, F::ZERO);
}
if remainder.yx_coefficients.last().unwrap().is_empty() {
let popped = remainder.yx_coefficients.pop();
debug_assert_eq!(popped, Some(vec![]));
}
}
while {
let index =
CoefficientIndex { y_pow: 0, x_pow: remainder.x_coefficients.len().try_into().unwrap() };
bool::from(
index.ct_gt(&denominator_leading_coefficient) |
index.ct_eq(&denominator_leading_coefficient),
)
} {
let popped = remainder.x_coefficients.pop();
debug_assert_eq!(popped, Some(F::ZERO));
}
}
(quotient, remainder)
}
}
impl<F: From<u64> + Zeroize + PrimeField> Rem<&Self> for Poly<F> {
type Output = Self;
fn rem(self, modulus: &Self) -> Self {
self.div_rem(modulus).1
}
}
impl<F: From<u64> + Zeroize + PrimeField> Poly<F> {
/// Evaluate this polynomial with the specified x/y values.
///
/// Panics on polynomials with terms whose powers exceed 2^64.
#[must_use]
pub fn eval(&self, x: F, y: F) -> F {
let mut res = self.zero_coefficient;
for (pow, coeff) in
self.y_coefficients.iter().enumerate().map(|(i, v)| (u64::try_from(i + 1).unwrap(), v))
{
res += y.pow([pow]) * coeff;
}
for (y_pow, coeffs) in
self.yx_coefficients.iter().enumerate().map(|(i, v)| (u64::try_from(i + 1).unwrap(), v))
{
let y_pow = y.pow([y_pow]);
for (x_pow, coeff) in
coeffs.iter().enumerate().map(|(i, v)| (u64::try_from(i + 1).unwrap(), v))
{
res += y_pow * x.pow([x_pow]) * coeff;
}
}
for (pow, coeff) in
self.x_coefficients.iter().enumerate().map(|(i, v)| (u64::try_from(i + 1).unwrap(), v))
{
res += x.pow([pow]) * coeff;
}
res
}
/// Differentiate a polynomial, reduced by a modulus with a leading y term y^2 x^0, by x and y.
///
/// This function has undefined behavior if unreduced.
#[must_use]
pub fn differentiate(&self) -> (Poly<F>, Poly<F>) {
// Differentiation by x practically involves:
// - Dropping everything without an x component
// - Shifting everything down a power of x
// - Multiplying the new coefficient by the power it prior was used with
let diff_x = {
let mut diff_x = Poly {
y_coefficients: vec![],
yx_coefficients: vec![],
x_coefficients: vec![],
zero_coefficient: F::ZERO,
};
if !self.x_coefficients.is_empty() {
let mut x_coeffs = self.x_coefficients.clone();
diff_x.zero_coefficient = x_coeffs.remove(0);
diff_x.x_coefficients = x_coeffs;
let mut prior_x_power = F::from(2);
for x_coeff in &mut diff_x.x_coefficients {
*x_coeff *= prior_x_power;
prior_x_power += F::ONE;
}
}
if !self.yx_coefficients.is_empty() {
let mut yx_coeffs = self.yx_coefficients[0].clone();
if !yx_coeffs.is_empty() {
diff_x.y_coefficients = vec![yx_coeffs.remove(0)];
diff_x.yx_coefficients = vec![yx_coeffs];
let mut prior_x_power = F::from(2);
for yx_coeff in &mut diff_x.yx_coefficients[0] {
*yx_coeff *= prior_x_power;
prior_x_power += F::ONE;
}
}
}
diff_x
};
// Differentiation by y is trivial
// It's the y coefficient as the zero coefficient, and the yx coefficients as the x
// coefficients
// This is thanks to any y term over y^2 being reduced out
let diff_y = Poly {
y_coefficients: vec![],
yx_coefficients: vec![],
x_coefficients: self.yx_coefficients.first().cloned().unwrap_or(vec![]),
zero_coefficient: self.y_coefficients.first().cloned().unwrap_or(F::ZERO),
};
(diff_x, diff_y)
}
/// Normalize the x coefficient to 1.
///
/// Panics if there is no x coefficient to normalize or if it cannot be normalized to 1.
#[must_use]
pub fn normalize_x_coefficient(self) -> Self {
let scalar = self.x_coefficients[0].invert().unwrap();
self * scalar
}
}
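
A minimal usage sketch of the above polynomial API (not part of this diff), assuming the
crate name `ec_divisors` per the `ec-divisors` path dependency appearing later, and using
the Pallas base field as the tests do:

```rust
use group::ff::Field;
use pasta_curves::Ep;
use ec_divisors::{DivisorCurve, Poly};

type F = <Ep as DivisorCurve>::FieldElement;

fn main() {
  // p = 2x + 3
  let p = Poly {
    y_coefficients: vec![],
    yx_coefficients: vec![],
    x_coefficients: vec![F::from(2u64)],
    zero_coefficient: F::from(3u64),
  };
  // m = x
  let m = Poly {
    y_coefficients: vec![],
    yx_coefficients: vec![],
    x_coefficients: vec![F::ONE],
    zero_coefficient: F::ZERO,
  };
  // (2x + 3) mod x leaves the constant term
  let r = p.clone() % &m;
  assert_eq!(r.zero_coefficient, F::from(3u64));
  // Evaluation at (x, y) = (5, 0): (2 * 5) + 3 = 13
  assert_eq!(p.eval(F::from(5u64), F::ZERO), F::from(13u64));
}
```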

View File

@@ -0,0 +1,237 @@
use rand_core::OsRng;
use group::{ff::Field, Group};
use dalek_ff_group::EdwardsPoint;
use pasta_curves::{Ep, Eq};
use crate::{DivisorCurve, Poly, new_divisor};
mod poly;
// Equation 4 in the security proofs
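// Concretely: for points P_i summing to the identity, the divisor D interpolating them
// satisfies D(C0) * D(C1) * D(C2) = prod_i (intercept - (P_i.y - slope * P_i.x)), where
// C0, C1, C2 = -(C0 + C1) are challenge points on the line defined by (slope, intercept)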
fn check_divisor<C: DivisorCurve>(points: Vec<C>) {
// Create the divisor
let divisor = new_divisor::<C>(&points).unwrap();
let eval = |c| {
let (x, y) = C::to_xy(c).unwrap();
divisor.eval(x, y)
};
// Decide challenges
let c0 = C::random(&mut OsRng);
let c1 = C::random(&mut OsRng);
let c2 = -(c0 + c1);
let (slope, intercept) = crate::slope_intercept::<C>(c0, c1);
let mut rhs = <C as DivisorCurve>::FieldElement::ONE;
for point in points {
let (x, y) = C::to_xy(point).unwrap();
rhs *= intercept - (y - (slope * x));
}
assert_eq!(eval(c0) * eval(c1) * eval(c2), rhs);
}
fn test_divisor<C: DivisorCurve>() {
for i in 1 ..= 255 {
println!("Test iteration {i}");
// Select points
let mut points = vec![];
for _ in 0 .. i {
points.push(C::random(&mut OsRng));
}
points.push(-points.iter().sum::<C>());
println!("Points {}", points.len());
// Perform the original check
check_divisor(points.clone());
// Create the divisor
let divisor = new_divisor::<C>(&points).unwrap();
// For a divisor interpolating 256 points, as one has when interpolating the 255 bits of a
// discrete log with the result of its scalar multiplication against a fixed generator, the
// lengths of the yx/x coefficients shouldn't exceed the following bounds
assert!((divisor.yx_coefficients.first().unwrap_or(&vec![]).len()) <= 126);
assert!((divisor.x_coefficients.len() - 1) <= 127);
assert!(
(1 + divisor.yx_coefficients.first().unwrap_or(&vec![]).len() +
(divisor.x_coefficients.len() - 1) +
1) <=
255
);
// Decide challenges
let c0 = C::random(&mut OsRng);
let c1 = C::random(&mut OsRng);
let c2 = -(c0 + c1);
let (slope, intercept) = crate::slope_intercept::<C>(c0, c1);
// Perform the Logarithmic derivative check
{
let dx_over_dz = {
let dx = Poly {
y_coefficients: vec![],
yx_coefficients: vec![],
x_coefficients: vec![C::FieldElement::ZERO, C::FieldElement::from(3)],
zero_coefficient: C::a(),
};
let dy = Poly {
y_coefficients: vec![C::FieldElement::from(2)],
yx_coefficients: vec![],
x_coefficients: vec![],
zero_coefficient: C::FieldElement::ZERO,
};
let dz = (dy.clone() * -slope) + &dx;
// We want dx/dz, and dz/dx is equal to dy/dx - slope
// Sagemath claims this, dy / dz, is the proper inverse
(dy, dz)
};
{
let sanity_eval = |c| {
let (x, y) = C::to_xy(c).unwrap();
dx_over_dz.0.eval(x, y) * dx_over_dz.1.eval(x, y).invert().unwrap()
};
let sanity = sanity_eval(c0) + sanity_eval(c1) + sanity_eval(c2);
// This verifies the dx/dz polynomial is correct
assert_eq!(sanity, C::FieldElement::ZERO);
}
// Logarithmic derivative check
let test = |divisor: Poly<_>| {
let (dx, dy) = divisor.differentiate();
let lhs = |c| {
let (x, y) = C::to_xy(c).unwrap();
let n_0 = (C::FieldElement::from(3) * (x * x)) + C::a();
let d_0 = (C::FieldElement::from(2) * y).invert().unwrap();
let p_0_n_0 = n_0 * d_0;
let n_1 = dy.eval(x, y);
let first = p_0_n_0 * n_1;
let second = dx.eval(x, y);
let d_1 = divisor.eval(x, y);
let fraction_1_n = first + second;
let fraction_1_d = d_1;
let fraction_2_n = dx_over_dz.0.eval(x, y);
let fraction_2_d = dx_over_dz.1.eval(x, y);
fraction_1_n * fraction_2_n * (fraction_1_d * fraction_2_d).invert().unwrap()
};
let lhs = lhs(c0) + lhs(c1) + lhs(c2);
let mut rhs = C::FieldElement::ZERO;
for point in &points {
let (x, y) = <C as DivisorCurve>::to_xy(*point).unwrap();
rhs += (intercept - (y - (slope * x))).invert().unwrap();
}
assert_eq!(lhs, rhs);
};
// Test the divisor and the divisor with a normalized x coefficient
test(divisor.clone());
test(divisor.normalize_x_coefficient());
}
}
}
fn test_same_point<C: DivisorCurve>() {
let mut points = vec![C::random(&mut OsRng)];
points.push(points[0]);
points.push(-points.iter().sum::<C>());
check_divisor(points);
}
fn test_subset_sum_to_infinity<C: DivisorCurve>() {
// Internally, a binary tree algorithm is used
// This executes the first pass to end up with [0, 0] for further reductions
{
let mut points = vec![C::random(&mut OsRng)];
points.push(-points[0]);
let next = C::random(&mut OsRng);
points.push(next);
points.push(-next);
check_divisor(points);
}
// This executes the first pass to end up with [0, X, -X, 0]
{
let mut points = vec![C::random(&mut OsRng)];
points.push(-points[0]);
let x_1 = C::random(&mut OsRng);
let x_2 = C::random(&mut OsRng);
points.push(x_1);
points.push(x_2);
points.push(-x_1);
points.push(-x_2);
let next = C::random(&mut OsRng);
points.push(next);
points.push(-next);
check_divisor(points);
}
}
#[test]
fn test_divisor_pallas() {
test_same_point::<Ep>();
test_subset_sum_to_infinity::<Ep>();
test_divisor::<Ep>();
}
#[test]
fn test_divisor_vesta() {
test_same_point::<Eq>();
test_subset_sum_to_infinity::<Eq>();
test_divisor::<Eq>();
}
#[test]
fn test_divisor_ed25519() {
// Since we're implementing Wei25519 ourselves, check the isomorphism works as expected
{
let incomplete_add = |p1, p2| {
let (x1, y1) = EdwardsPoint::to_xy(p1).unwrap();
let (x2, y2) = EdwardsPoint::to_xy(p2).unwrap();
// mmadd-1998-cmo
let u = y2 - y1;
let uu = u * u;
let v = x2 - x1;
let vv = v * v;
let vvv = v * vv;
let R = vv * x1;
let A = uu - vvv - R.double();
let x3 = v * A;
let y3 = (u * (R - A)) - (vvv * y1);
let z3 = vvv;
// Normalize from XYZ to XY
let x3 = x3 * z3.invert().unwrap();
let y3 = y3 * z3.invert().unwrap();
// Edwards addition -> Wei25519 coordinates should be equivalent to Wei25519 addition
assert_eq!(EdwardsPoint::to_xy(p1 + p2).unwrap(), (x3, y3));
};
for _ in 0 .. 256 {
incomplete_add(EdwardsPoint::random(&mut OsRng), EdwardsPoint::random(&mut OsRng));
}
}
test_same_point::<EdwardsPoint>();
test_subset_sum_to_infinity::<EdwardsPoint>();
test_divisor::<EdwardsPoint>();
}

View File

@@ -0,0 +1,148 @@
use rand_core::OsRng;
use group::ff::Field;
use pasta_curves::Ep;
use crate::{DivisorCurve, Poly};
type F = <Ep as DivisorCurve>::FieldElement;
#[test]
fn test_poly() {
let zero = F::ZERO;
let one = F::ONE;
{
let mut poly = Poly::zero();
poly.y_coefficients = vec![zero, one];
let mut modulus = Poly::zero();
modulus.y_coefficients = vec![one];
assert_eq!(
poly.clone().div_rem(&modulus).0,
Poly {
y_coefficients: vec![one],
yx_coefficients: vec![],
x_coefficients: vec![],
zero_coefficient: zero
}
);
assert_eq!(
poly % &modulus,
Poly {
y_coefficients: vec![],
yx_coefficients: vec![],
x_coefficients: vec![],
zero_coefficient: zero
}
);
}
{
let mut poly = Poly::zero();
poly.y_coefficients = vec![zero, one];
let mut squared = Poly::zero();
squared.y_coefficients = vec![zero, zero, zero, one];
assert_eq!(poly.clone() * &poly, squared);
}
{
let mut a = Poly::zero();
a.zero_coefficient = F::from(2u64);
let mut b = Poly::zero();
b.zero_coefficient = F::from(3u64);
let mut res = Poly::zero();
res.zero_coefficient = F::from(6u64);
assert_eq!(a.clone() * &b, res);
b.y_coefficients = vec![F::from(4u64)];
res.y_coefficients = vec![F::from(8u64)];
assert_eq!(a.clone() * &b, res);
assert_eq!(b.clone() * &a, res);
a.x_coefficients = vec![F::from(5u64)];
res.x_coefficients = vec![F::from(15u64)];
res.yx_coefficients = vec![vec![F::from(20u64)]];
assert_eq!(a.clone() * &b, res);
assert_eq!(b * &a, res);
// res is now 20xy + 8*y + 15*x + 6
// res ** 2 =
// 400*x^2*y^2 + 320*x*y^2 + 64*y^2 + 600*x^2*y + 480*x*y + 96*y + 225*x^2 + 180*x + 36
let mut squared = Poly::zero();
squared.y_coefficients = vec![F::from(96u64), F::from(64u64)];
squared.yx_coefficients =
vec![vec![F::from(480u64), F::from(600u64)], vec![F::from(320u64), F::from(400u64)]];
squared.x_coefficients = vec![F::from(180u64), F::from(225u64)];
squared.zero_coefficient = F::from(36u64);
assert_eq!(res.clone() * &res, squared);
}
}
#[test]
fn test_differentiation() {
let random = || F::random(&mut OsRng);
let input = Poly {
y_coefficients: vec![random()],
yx_coefficients: vec![vec![random()]],
x_coefficients: vec![random(), random(), random()],
zero_coefficient: random(),
};
let (diff_x, diff_y) = input.differentiate();
assert_eq!(
diff_x,
Poly {
y_coefficients: vec![input.yx_coefficients[0][0]],
yx_coefficients: vec![],
x_coefficients: vec![
F::from(2) * input.x_coefficients[1],
F::from(3) * input.x_coefficients[2]
],
zero_coefficient: input.x_coefficients[0],
}
);
assert_eq!(
diff_y,
Poly {
y_coefficients: vec![],
yx_coefficients: vec![],
x_coefficients: vec![input.yx_coefficients[0][0]],
zero_coefficient: input.y_coefficients[0],
}
);
let input = Poly {
y_coefficients: vec![random()],
yx_coefficients: vec![vec![random(), random()]],
x_coefficients: vec![random(), random(), random(), random()],
zero_coefficient: random(),
};
let (diff_x, diff_y) = input.differentiate();
assert_eq!(
diff_x,
Poly {
y_coefficients: vec![input.yx_coefficients[0][0]],
yx_coefficients: vec![vec![F::from(2) * input.yx_coefficients[0][1]]],
x_coefficients: vec![
F::from(2) * input.x_coefficients[1],
F::from(3) * input.x_coefficients[2],
F::from(4) * input.x_coefficients[3],
],
zero_coefficient: input.x_coefficients[0],
}
);
assert_eq!(
diff_y,
Poly {
y_coefficients: vec![],
yx_coefficients: vec![],
x_coefficients: vec![input.yx_coefficients[0][0], input.yx_coefficients[0][1]],
zero_coefficient: input.y_coefficients[0],
}
);
}

View File

@@ -0,0 +1,21 @@
[package]
name = "generalized-bulletproofs-ec-gadgets"
version = "0.1.0"
description = "Gadgets for working with an embedded Elliptic Curve in a Generalized Bulletproofs circuit"
license = "MIT"
repository = "https://github.com/serai-dex/serai/tree/develop/crypto/evrf/ec-gadgets"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = ["bulletproofs", "circuit", "divisors"]
edition = "2021"
rust-version = "1.80"
[package.metadata.docs.rs]
all-features = true
rustdoc-args = ["--cfg", "docsrs"]
[dependencies]
generic-array = { version = "1", default-features = false, features = ["alloc"] }
ciphersuite = { path = "../../ciphersuite", version = "0.4", default-features = false, features = ["std"] }
generalized-bulletproofs-circuit-abstraction = { path = "../circuit-abstraction" }

View File

@@ -0,0 +1,21 @@
MIT License
Copyright (c) 2024 Luke Parker
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

View File

@@ -0,0 +1,3 @@
# Generalized Bulletproofs Circuit Abstraction
A circuit abstraction around `generalized-bulletproofs`.

View File

@@ -0,0 +1,529 @@
use core::fmt;
use ciphersuite::{
group::ff::{Field, PrimeField, BatchInverter},
Ciphersuite,
};
use generalized_bulletproofs_circuit_abstraction::*;
use crate::*;
/// Parameters for a discrete logarithm proof.
///
/// This isn't required to be implemented by the Field/Group/Ciphersuite, solely a struct, to
/// enable parameterization of discrete log proofs to the bitlength of the discrete logarithm.
/// While that may be F::NUM_BITS, for a discrete log proof of a full scalar, it could also be
/// 64, for a discrete log proof of a u64 (such as when opening a Pedersen commitment in-circuit).
pub trait DiscreteLogParameters {
/// The amount of bits used to represent a scalar.
type ScalarBits: ArrayLength;
/// The amount of x**i coefficients in a divisor.
///
/// This is the amount of points in a divisor (the amount of bits in a scalar, plus one) divided
/// by two.
type XCoefficients: ArrayLength;
/// The amount of x**i coefficients in a divisor, minus one.
type XCoefficientsMinusOne: ArrayLength;
/// The amount of y x**i coefficients in a divisor.
///
/// This is the amount of points in a divisor (the amount of bits in a scalar, plus one) plus
/// one, divided by two, minus two.
type YxCoefficients: ArrayLength;
}
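// Illustrative parameterization (not from the source): for a 255-bit scalar,
// ScalarBits = 255, XCoefficients = (255 + 1) / 2 = 128, XCoefficientsMinusOne = 127, and
// YxCoefficients = (((255 + 1) + 1) / 2) - 2 = 126, matching the coefficient bounds
// asserted by the divisor tests.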
/// A tabled generator for proving/verifying discrete logarithm claims.
#[derive(Clone)]
pub struct GeneratorTable<F: PrimeField, Parameters: DiscreteLogParameters>(
GenericArray<(F, F), Parameters::ScalarBits>,
);
impl<F: PrimeField, Parameters: DiscreteLogParameters> fmt::Debug
for GeneratorTable<F, Parameters>
{
fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result {
fmt
.debug_struct("GeneratorTable")
.field("x", &self.0[0].0)
.field("y", &self.0[0].1)
.finish_non_exhaustive()
}
}
impl<F: PrimeField, Parameters: DiscreteLogParameters> GeneratorTable<F, Parameters> {
/// Create a new table for this generator.
///
/// The generator is assumed to be well-formed and on-curve. This function may panic if it's not.
pub fn new(curve: &CurveSpec<F>, generator_x: F, generator_y: F) -> Self {
// mdbl-2007-bl
fn dbl<F: PrimeField>(a: F, x1: F, y1: F) -> (F, F) {
let xx = x1 * x1;
let w = a + (xx + xx.double());
let y1y1 = y1 * y1;
let r = y1y1 + y1y1;
let sss = (y1 * r).double().double();
let rr = r * r;
let b = x1 + r;
let b = (b * b) - xx - rr;
let h = (w * w) - b.double();
let x3 = h.double() * y1;
let y3 = (w * (b - h)) - rr.double();
let z3 = sss;
// Normalize from XYZ to XY
let z3_inv = z3.invert().unwrap();
let x3 = x3 * z3_inv;
let y3 = y3 * z3_inv;
(x3, y3)
}
let mut res = Self(GenericArray::default());
res.0[0] = (generator_x, generator_y);
for i in 1 .. Parameters::ScalarBits::USIZE {
let last = res.0[i - 1];
res.0[i] = dbl(curve.a, last.0, last.1);
}
res
}
}
/// A representation of the divisor.
///
/// The coefficient for x**1 is explicitly excluded as it's expected to be normalized to 1.
#[derive(Clone)]
pub struct Divisor<Parameters: DiscreteLogParameters> {
/// The coefficient for the `y` term of the divisor.
///
/// There is never more than one `y**i x**0` coefficient as the leading term of the modulus is
/// `y**2`. It's assumed the coefficient is non-zero (and present) as it will be for any divisor
/// exceeding trivial complexity.
pub y: Variable,
/// The coefficients for the `y**1 x**i` terms of the polynomial.
// This subtraction enforces the divisor to have at least 4 points which is acceptable.
// TODO: Double check these constants
pub yx: GenericArray<Variable, Parameters::YxCoefficients>,
/// The coefficients for the `x**i` terms of the polynomial, skipping x**1.
///
/// x**1 is skipped as it's expected to be normalized to 1, and therefore constant, in order to
/// ensure the divisor is non-zero (as necessary for the proof to be complete).
// Subtract 1 from the length due to skipping the coefficient for x**1
pub x_from_power_of_2: GenericArray<Variable, Parameters::XCoefficientsMinusOne>,
/// The constant term in the polynomial (alternatively, the coefficient for y**0 x**0).
pub zero: Variable,
}
/// A point, its discrete logarithm, and the divisor to prove it.
#[derive(Clone)]
pub struct PointWithDlog<Parameters: DiscreteLogParameters> {
/// The point which is supposedly the result of scaling the generator by the discrete logarithm.
pub point: (Variable, Variable),
/// The discrete logarithm, represented as coefficients of a polynomial of 2**i.
pub dlog: GenericArray<Variable, Parameters::ScalarBits>,
/// The divisor interpolating the relevant doublings of generator with the inverse of the point.
pub divisor: Divisor<Parameters>,
}
/// A struct containing a point used for the evaluation of a divisor.
///
/// Preprocesses and caches as much of the calculation as possible to minimize work upon reuse of
/// challenge points.
struct ChallengePoint<F: PrimeField, Parameters: DiscreteLogParameters> {
y: F,
yx: GenericArray<F, Parameters::YxCoefficients>,
x: GenericArray<F, Parameters::XCoefficients>,
p_0_n_0: F,
x_p_0_n_0: GenericArray<F, Parameters::YxCoefficients>,
p_1_n: F,
p_1_d: F,
}
impl<F: PrimeField, Parameters: DiscreteLogParameters> ChallengePoint<F, Parameters> {
fn new(
curve: &CurveSpec<F>,
// The slope between all of the challenge points
slope: F,
// The x and y coordinates
x: F,
y: F,
// The inversion of twice the y coordinate
// We accept this as an argument so that the caller can calculate these with a batch inversion
inv_two_y: F,
) -> Self {
// Powers of x, skipping x**0
let divisor_x_len = Parameters::XCoefficients::USIZE;
let mut x_pows = GenericArray::default();
x_pows[0] = x;
for i in 1 .. divisor_x_len {
let last = x_pows[i - 1];
x_pows[i] = last * x;
}
// Powers of x multiplied by y
let divisor_yx_len = Parameters::YxCoefficients::USIZE;
let mut yx = GenericArray::default();
// Skips x**0
yx[0] = y * x;
for i in 1 .. divisor_yx_len {
let last = yx[i - 1];
yx[i] = last * x;
}
let x_sq = x.square();
let three_x_sq = x_sq.double() + x_sq;
let three_x_sq_plus_a = three_x_sq + curve.a;
let two_y = y.double();
// p_0_n_0 from `DivisorChallenge`
let p_0_n_0 = three_x_sq_plus_a * inv_two_y;
let mut x_p_0_n_0 = GenericArray::default();
// Since this iterates over x, which skips x**0, this also skips p_0_n_0 x**0
for (i, x) in x_pows.iter().take(divisor_yx_len).enumerate() {
x_p_0_n_0[i] = p_0_n_0 * x;
}
// p_1_n from `DivisorChallenge`
let p_1_n = two_y;
// p_1_d from `DivisorChallenge`
let p_1_d = (-slope * p_1_n) + three_x_sq_plus_a;
ChallengePoint { x: x_pows, y, yx, p_0_n_0, x_p_0_n_0, p_1_n, p_1_d }
}
}
// `DivisorChallenge` from the section `Discrete Log Proof`
fn divisor_challenge_eval<C: Ciphersuite, Parameters: DiscreteLogParameters>(
circuit: &mut Circuit<C>,
divisor: &Divisor<Parameters>,
challenge: &ChallengePoint<C::F, Parameters>,
) -> Variable {
// The evaluation of the divisor differentiated by y, further multiplied by p_0_n_0
// Differentation drops everything without a y coefficient, and drops what remains by a power
// of y
// (y**1 -> y**0, yx**i -> x**i)
// This aligns with p_0_n_1 from `DivisorChallenge`
let p_0_n_1 = {
let mut p_0_n_1 = LinComb::empty().term(challenge.p_0_n_0, divisor.y);
for (j, var) in divisor.yx.iter().enumerate() {
// This does not raise by `j + 1` as x_p_0_n_0 omits x**0
p_0_n_1 = p_0_n_1.term(challenge.x_p_0_n_0[j], *var);
}
p_0_n_1
};
// The evaluation of the divisor differentiated by x
// This aligns with p_0_n_2 from `DivisorChallenge`
let p_0_n_2 = {
// The coefficient for x**1 is 1, so 1 becomes the new zero coefficient
let mut p_0_n_2 = LinComb::empty().constant(C::F::ONE);
// Handle the new y coefficient
p_0_n_2 = p_0_n_2.term(challenge.y, divisor.yx[0]);
// Handle the new yx coefficients
for (j, yx) in divisor.yx.iter().enumerate().skip(1) {
// For the power which was shifted down, we multiply this coefficient
// 3 x**2 -> 2 * 3 x**1
let original_power_of_x = C::F::from(u64::try_from(j + 1).unwrap());
// `j - 1` so `j = 1` indexes yx[0] as yx[0] is the y x**1
// (yx omits y x**0)
let this_weight = original_power_of_x * challenge.yx[j - 1];
p_0_n_2 = p_0_n_2.term(this_weight, *yx);
}
// Handle the x coefficients
// We don't skip the first one as `x_from_power_of_2` already omits x**1
for (i, x) in divisor.x_from_power_of_2.iter().enumerate() {
// i + 2 as the paper expects i to start from 1 and be + 1, yet we start from 0
let original_power_of_x = C::F::from(u64::try_from(i + 2).unwrap());
// Still x[i] as x[0] is x**1
let this_weight = original_power_of_x * challenge.x[i];
p_0_n_2 = p_0_n_2.term(this_weight, *x);
}
p_0_n_2
};
// p_0_n from `DivisorChallenge`
let p_0_n = p_0_n_1 + &p_0_n_2;
// Evaluation of the divisor
// p_0_d from `DivisorChallenge`
let p_0_d = {
let mut p_0_d = LinComb::empty().term(challenge.y, divisor.y);
for (var, c_yx) in divisor.yx.iter().zip(&challenge.yx) {
p_0_d = p_0_d.term(*c_yx, *var);
}
for (i, var) in divisor.x_from_power_of_2.iter().enumerate() {
// This `i+1` is preserved, despite most not being as x omits x**0, as this assumes we
// start with `i=1`
p_0_d = p_0_d.term(challenge.x[i + 1], *var);
}
// Adding x effectively adds a `1 x` term, ensuring the divisor isn't 0
p_0_d.term(C::F::ONE, divisor.zero).constant(challenge.x[0])
};
// Calculate the joint numerator
// p_n from `DivisorChallenge`
let p_n = p_0_n * challenge.p_1_n;
// Calculate the joint denominator
// p_d from `DivisorChallenge`
let p_d = p_0_d * challenge.p_1_d;
// We want `n / d = o`
// `n / d = o` == `n = d * o`
// These are safe unwraps as they're solely done by the prover and should always be non-zero
let witness =
circuit.eval(&p_d).map(|p_d| (p_d, circuit.eval(&p_n).unwrap() * p_d.invert().unwrap()));
let (_l, o, n_claim) = circuit.mul(Some(p_d), None, witness);
circuit.equality(p_n, &n_claim.into());
o
}
/// A challenge to evaluate divisors with.
///
/// This challenge must be sampled after writing the commitments to the transcript. This challenge
/// is reusable across various divisors.
pub struct DiscreteLogChallenge<F: PrimeField, Parameters: DiscreteLogParameters> {
c0: ChallengePoint<F, Parameters>,
c1: ChallengePoint<F, Parameters>,
c2: ChallengePoint<F, Parameters>,
slope: F,
intercept: F,
}
/// A generator which has been challenged and is ready for use in evaluating discrete logarithm
/// claims.
pub struct ChallengedGenerator<F: PrimeField, Parameters: DiscreteLogParameters>(
GenericArray<F, Parameters::ScalarBits>,
);
/// Gadgets for proving the discrete logarithm of points on an elliptic curve defined over the
/// scalar field of the curve of the Bulletproof.
pub trait EcDlogGadgets<C: Ciphersuite> {
/// Sample a challenge for a series of discrete logarithm claims.
///
/// This must be called after writing the commitments to the transcript.
///
/// The generators are assumed to be non-empty. They are not transcripted. If your generators are
/// dynamic, they must be properly transcripted into the context.
///
/// May panic/have undefined behavior if an assumption is broken.
#[allow(clippy::type_complexity)]
fn discrete_log_challenge<T: Transcript, Parameters: DiscreteLogParameters>(
&self,
transcript: &mut T,
curve: &CurveSpec<C::F>,
generators: &[GeneratorTable<C::F, Parameters>],
) -> (DiscreteLogChallenge<C::F, Parameters>, Vec<ChallengedGenerator<C::F, Parameters>>);
/// Prove this point has the specified discrete logarithm over the specified generator.
///
/// The discrete logarithm is not validated to be in a canonical form. The only guarantee made on
/// it is that it's a consistent representation of _a_ discrete logarithm (reuse won't enable
/// re-interpretation as a distinct discrete logarithm).
///
/// This does ensure the point is on-curve.
///
/// This MUST only be called with `Variable`s present within commitments.
///
/// May panic/have undefined behavior if an assumption is broken, or if passed an invalid
/// witness.
fn discrete_log<Parameters: DiscreteLogParameters>(
&mut self,
curve: &CurveSpec<C::F>,
point: PointWithDlog<Parameters>,
challenge: &DiscreteLogChallenge<C::F, Parameters>,
challenged_generator: &ChallengedGenerator<C::F, Parameters>,
) -> OnCurve;
}
impl<C: Ciphersuite> EcDlogGadgets<C> for Circuit<C> {
// This is part of `DiscreteLog` from `Discrete Log Proof`, specifically, the challenges and
// the calculations dependent solely on them
fn discrete_log_challenge<T: Transcript, Parameters: DiscreteLogParameters>(
&self,
transcript: &mut T,
curve: &CurveSpec<C::F>,
generators: &[GeneratorTable<C::F, Parameters>],
) -> (DiscreteLogChallenge<C::F, Parameters>, Vec<ChallengedGenerator<C::F, Parameters>>) {
// Get the challenge points
// TODO: Implement a proper hash to curve
let (c0_x, c0_y) = loop {
let c0_x: C::F = transcript.challenge();
let Some(c0_y) =
Option::<C::F>::from(((c0_x.square() * c0_x) + (curve.a * c0_x) + curve.b).sqrt())
else {
continue;
};
// Takes the even y coordinate so as to not be dependent on whichever root the above sqrt
// happens to return
// TODO: Randomly select which to take
break (c0_x, if bool::from(c0_y.is_odd()) { -c0_y } else { c0_y });
};
let (c1_x, c1_y) = loop {
let c1_x: C::F = transcript.challenge();
let Some(c1_y) =
Option::<C::F>::from(((c1_x.square() * c1_x) + (curve.a * c1_x) + curve.b).sqrt())
else {
continue;
};
break (c1_x, if bool::from(c1_y.is_odd()) { -c1_y } else { c1_y });
};
// mmadd-1998-cmo
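// (The mixed affine-addition formulas from the Explicit-Formulas Database; they're
// incomplete, so equal x coordinates, a doubling or an addition of a point's negation,
// yield None)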
fn incomplete_add<F: PrimeField>(x1: F, y1: F, x2: F, y2: F) -> Option<(F, F)> {
if x1 == x2 {
None?
}
let u = y2 - y1;
let uu = u * u;
let v = x2 - x1;
let vv = v * v;
let vvv = v * vv;
let r = vv * x1;
let a = uu - vvv - r.double();
let x3 = v * a;
let y3 = (u * (r - a)) - (vvv * y1);
let z3 = vvv;
// Normalize from XYZ to XY
let z3_inv = Option::<F>::from(z3.invert())?;
let x3 = x3 * z3_inv;
let y3 = y3 * z3_inv;
Some((x3, y3))
}
let (c2_x, c2_y) = incomplete_add::<C::F>(c0_x, c0_y, c1_x, c1_y)
.expect("randomly selected points shared an x coordinate");
// We want C0, C1, C2 = -(C0 + C1)
let c2_y = -c2_y;
// Calculate the slope and intercept
// Safe invert as these x coordinates must be distinct due to passing the above incomplete_add
let slope = (c1_y - c0_y) * (c1_x - c0_x).invert().unwrap();
let intercept = c0_y - (slope * c0_x);
// Calculate the inversions for 2 c_y (for each c) and all of the challenged generators
let mut inversions = vec![C::F::ZERO; 3 + (generators.len() * Parameters::ScalarBits::USIZE)];
// Needed for the left-hand side eval
{
inversions[0] = c0_y.double();
inversions[1] = c1_y.double();
inversions[2] = c2_y.double();
}
// Perform the inversions for the generators
for (i, generator) in generators.iter().enumerate() {
// Needed for the right-hand side eval
for (j, generator) in generator.0.iter().enumerate() {
// `DiscreteLog` has weights of `(mu - (G_i.y - (slope * G_i.x)))**-1` in its last line
inversions[3 + (i * Parameters::ScalarBits::USIZE) + j] =
intercept - (generator.1 - (slope * generator.0));
}
}
for challenge_inversion in &inversions {
// This should be unreachable barring negligible probability
if challenge_inversion.is_zero().into() {
panic!("trying to invert 0");
}
}
let mut scratch = vec![C::F::ZERO; inversions.len()];
let _ = BatchInverter::invert_with_external_scratch(&mut inversions, &mut scratch);
let mut inversions = inversions.into_iter();
let inv_c0_two_y = inversions.next().unwrap();
let inv_c1_two_y = inversions.next().unwrap();
let inv_c2_two_y = inversions.next().unwrap();
let c0 = ChallengePoint::new(curve, slope, c0_x, c0_y, inv_c0_two_y);
let c1 = ChallengePoint::new(curve, slope, c1_x, c1_y, inv_c1_two_y);
let c2 = ChallengePoint::new(curve, slope, c2_x, c2_y, inv_c2_two_y);
// Fill in the inverted values
let mut challenged_generators = Vec::with_capacity(generators.len());
for _ in 0 .. generators.len() {
let mut challenged_generator = GenericArray::default();
for i in 0 .. Parameters::ScalarBits::USIZE {
challenged_generator[i] = inversions.next().unwrap();
}
challenged_generators.push(ChallengedGenerator(challenged_generator));
}
(DiscreteLogChallenge { c0, c1, c2, slope, intercept }, challenged_generators)
}
// `DiscreteLog` from `Discrete Log Proof`
fn discrete_log<Parameters: DiscreteLogParameters>(
&mut self,
curve: &CurveSpec<C::F>,
point: PointWithDlog<Parameters>,
challenge: &DiscreteLogChallenge<C::F, Parameters>,
challenged_generator: &ChallengedGenerator<C::F, Parameters>,
) -> OnCurve {
let PointWithDlog { divisor, dlog, point } = point;
// Ensure this is being safely called
let arg_iter = [point.0, point.1, divisor.y, divisor.zero];
let arg_iter = arg_iter.iter().chain(divisor.yx.iter());
let arg_iter = arg_iter.chain(divisor.x_from_power_of_2.iter());
let arg_iter = arg_iter.chain(dlog.iter());
for variable in arg_iter {
debug_assert!(
matches!(variable, Variable::CG { .. } | Variable::CH { .. } | Variable::V(_)),
"discrete log proofs requires all arguments belong to commitments",
);
}
// Check the point is on curve
let point = self.on_curve(curve, point);
// The challenge has already been sampled so those lines aren't necessary
// lhs from the paper, evaluating the divisor
let lhs_eval = LinComb::from(divisor_challenge_eval(self, &divisor, &challenge.c0)) +
&LinComb::from(divisor_challenge_eval(self, &divisor, &challenge.c1)) +
&LinComb::from(divisor_challenge_eval(self, &divisor, &challenge.c2));
// Interpolate the doublings of the generator
let mut rhs_eval = LinComb::empty();
// We call this `bit` yet it's not constrained to being a bit
// It's presumed to be a bit, yet it may be malleated
for (bit, weight) in dlog.into_iter().zip(&challenged_generator.0) {
rhs_eval = rhs_eval.term(*weight, bit);
}
// Interpolate the output point
// intercept - (y - (slope * x))
// intercept - y + (slope * x)
// -y + (slope * x) + intercept
// EXCEPT the output point we're proving the discrete log for isn't the one interpolated
// Its negative is, so -y becomes y
// y + (slope * x) + intercept
let output_interpolation = LinComb::empty()
.constant(challenge.intercept)
.term(C::F::ONE, point.y)
.term(challenge.slope, point.x);
let output_interpolation_eval = self.eval(&output_interpolation);
let (_output_interpolation, inverse) =
self.inverse(Some(output_interpolation), output_interpolation_eval);
rhs_eval = rhs_eval.term(C::F::ONE, inverse);
self.equality(lhs_eval, &rhs_eval);
point
}
}
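
A hypothetical call sequence (names assumed, not from this diff) tying these gadgets
together: one challenge is sampled for all claims after the commitments are transcripted,
then each discrete log claim is constrained against its challenged generator.

```rust
// `circuit: Circuit<C>`, `transcript`, `curve`, `generator_tables`, `point_with_dlog`,
// and `Params: DiscreteLogParameters` are all assumed to already exist.
let (challenge, challenged_generators) = circuit.discrete_log_challenge::<_, Params>(
  &mut transcript,
  &curve,
  &generator_tables,
);
let point_on_curve =
  circuit.discrete_log(&curve, point_with_dlog, &challenge, &challenged_generators[0]);
```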

View File

@@ -0,0 +1,130 @@
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
#![doc = include_str!("../README.md")]
#![deny(missing_docs)]
#![allow(non_snake_case)]
use generic_array::{typenum::Unsigned, ArrayLength, GenericArray};
use ciphersuite::{group::ff::Field, Ciphersuite};
use generalized_bulletproofs_circuit_abstraction::*;
mod dlog;
pub use dlog::*;
/// The specification of a short Weierstrass curve over the field `F`.
///
/// The short Weierstrass curve is defined via the formula `y**2 = x**3 + a*x + b`.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
pub struct CurveSpec<F> {
/// The `a` constant in the curve formula.
pub a: F,
/// The `b` constant in the curve formula.
pub b: F,
}
/// A struct for a point on a towered curve which has been confirmed to be on-curve.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
pub struct OnCurve {
pub(crate) x: Variable,
pub(crate) y: Variable,
}
impl OnCurve {
/// The variable for the x-coordinate.
pub fn x(&self) -> Variable {
self.x
}
/// The variable for the y-coordinate.
pub fn y(&self) -> Variable {
self.y
}
}
/// Gadgets for working with points on an elliptic curve defined over the scalar field of the curve
/// of the Bulletproof.
pub trait EcGadgets<C: Ciphersuite> {
/// Constrain an x and y coordinate as being on the specified curve.
///
/// The specified curve is defined over the scalar field of the curve this proof is performed
/// over, offering efficient arithmetic.
///
/// May panic, when proving, if the point is not actually on-curve.
fn on_curve(&mut self, curve: &CurveSpec<C::F>, point: (Variable, Variable)) -> OnCurve;
/// Perform incomplete addition for a fixed point and an on-curve point.
///
/// `a` is the x and y coordinates of the fixed point, assumed to be on-curve.
///
/// `b` is a point prior checked to be on-curve.
///
/// `c` is a point prior checked to be on-curve, constrained to be the sum of `a` and `b`.
///
/// `a` and `b` are checked to have distinct x coordinates.
///
/// This function may panic if `a` is malformed or, when proving, if `c` is not actually the
/// sum of `a` and `b`.
fn incomplete_add_fixed(&mut self, a: (C::F, C::F), b: OnCurve, c: OnCurve) -> OnCurve;
}
impl<C: Ciphersuite> EcGadgets<C> for Circuit<C> {
fn on_curve(&mut self, curve: &CurveSpec<C::F>, (x, y): (Variable, Variable)) -> OnCurve {
let x_eval = self.eval(&LinComb::from(x));
let (_x, _x_2, x2) =
self.mul(Some(LinComb::from(x)), Some(LinComb::from(x)), x_eval.map(|x| (x, x)));
let (_x, _x_2, x3) =
self.mul(Some(LinComb::from(x2)), Some(LinComb::from(x)), x_eval.map(|x| (x * x, x)));
let expected_y2 = LinComb::from(x3).term(curve.a, x).constant(curve.b);
let y_eval = self.eval(&LinComb::from(y));
let (_y, _y_2, y2) =
self.mul(Some(LinComb::from(y)), Some(LinComb::from(y)), y_eval.map(|y| (y, y)));
self.equality(y2.into(), &expected_y2);
OnCurve { x, y }
}
fn incomplete_add_fixed(&mut self, a: (C::F, C::F), b: OnCurve, c: OnCurve) -> OnCurve {
// Check b.x != a.0
{
let bx_lincomb = LinComb::from(b.x);
let bx_eval = self.eval(&bx_lincomb);
self.inequality(bx_lincomb, &LinComb::empty().constant(a.0), bx_eval.map(|bx| (bx, a.0)));
}
let (x0, y0) = (a.0, a.1);
let (x1, y1) = (b.x, b.y);
let (x2, y2) = (c.x, c.y);
let slope_eval = self.eval(&LinComb::from(x1)).map(|x1| {
let y1 = self.eval(&LinComb::from(b.y)).unwrap();
(y1 - y0) * (x1 - x0).invert().unwrap()
});
// slope * (x1 - x0) = y1 - y0
let x1_minus_x0 = LinComb::from(x1).constant(-x0);
let x1_minus_x0_eval = self.eval(&x1_minus_x0);
let (slope, _r, o) =
self.mul(None, Some(x1_minus_x0), slope_eval.map(|slope| (slope, x1_minus_x0_eval.unwrap())));
self.equality(LinComb::from(o), &LinComb::from(y1).constant(-y0));
// slope * (x2 - x0) = -y2 - y0
let x2_minus_x0 = LinComb::from(x2).constant(-x0);
let x2_minus_x0_eval = self.eval(&x2_minus_x0);
let (_slope, _x2_minus_x0, o) = self.mul(
Some(slope.into()),
Some(x2_minus_x0),
slope_eval.map(|slope| (slope, x2_minus_x0_eval.unwrap())),
);
self.equality(o.into(), &LinComb::empty().term(-C::F::ONE, y2).constant(-y0));
// slope * slope = x0 + x1 + x2
let (_slope, _slope_2, o) =
self.mul(Some(slope.into()), Some(slope.into()), slope_eval.map(|slope| (slope, slope)));
self.equality(o.into(), &LinComb::from(x1).term(C::F::ONE, x2).constant(x0));
OnCurve { x: x2, y: y2 }
}
}
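
A brief sketch of the gadget flow (variables assumed, not from this diff): both points are
first constrained to be on-curve, then their sum with a fixed point is constrained.

```rust
// `circuit: Circuit<C>`, `spec: CurveSpec<C::F>`, the `Variable`s, and the fixed affine
// point `(a_x, a_y)` are assumed to already exist.
let b = circuit.on_curve(&spec, (b_x_var, b_y_var));
let c = circuit.on_curve(&spec, (c_x_var, c_y_var));
// Constrains c = a + b (and that a.x != b.x)
let _sum = circuit.incomplete_add_fixed((a_x, a_y), b, c);
```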

View File

@@ -0,0 +1,40 @@
[package]
name = "embedwards25519"
version = "0.1.0"
description = "A curve defined over the Ed25519 scalar field"
license = "MIT"
repository = "https://github.com/serai-dex/serai/tree/develop/crypto/evrf/embedwards25519"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = ["curve25519", "ed25519", "ristretto255", "group"]
edition = "2021"
rust-version = "1.80"
[package.metadata.docs.rs]
all-features = true
rustdoc-args = ["--cfg", "docsrs"]
[dependencies]
rustversion = "1"
hex-literal = { version = "0.4", default-features = false }
rand_core = { version = "0.6", default-features = false, features = ["std"] }
zeroize = { version = "^1.5", default-features = false, features = ["std", "zeroize_derive"] }
subtle = { version = "^2.4", default-features = false, features = ["std"] }
generic-array = { version = "1", default-features = false }
crypto-bigint = { version = "0.5", default-features = false, features = ["zeroize"] }
dalek-ff-group = { path = "../../dalek-ff-group", version = "0.4", default-features = false }
blake2 = { version = "0.10", default-features = false, features = ["std"] }
ciphersuite = { path = "../../ciphersuite", version = "0.4", default-features = false, features = ["std"] }
ec-divisors = { path = "../divisors" }
generalized-bulletproofs-ec-gadgets = { path = "../ec-gadgets" }
[dev-dependencies]
hex = "0.4"
rand_core = { version = "0.6", features = ["std"] }
ff-group-tests = { path = "../../ff-group-tests" }

View File

@@ -0,0 +1,21 @@
MIT License
Copyright (c) 2022-2024 Luke Parker
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

View File

@@ -0,0 +1,21 @@
# embedwards25519
A curve defined over the Ed25519 scalar field.
This curve was found via
[tevador's script](https://gist.github.com/tevador/4524c2092178df08996487d4e272b096)
for finding curves (specifically, curve cycles), modified to search for curves
whose field is the Ed25519 scalar field (not the Ed25519 field).
```
p = 0x1000000000000000000000000000000014def9dea2f79cd65812631a5cf5d3ed
q = 0x0fffffffffffffffffffffffffffffffe53f4debb78ff96877063f0306eef96b
D = -420435
y^2 = x^3 - 3*x + 4188043517836764736459661287169077812555441231147410753119540549773825148767
```
The embedding degree is `(q-1)/2`.
This curve should not be used with single-coordinate ladders, and points should
always be represented in a compressed form (preventing receiving off-curve
points).
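A sketch (not from this repository) of describing this curve with the `CurveSpec` from the
ec-gadgets crate above, where `F` is assumed to be an implementation of `PrimeField` for
the Ed25519 scalar field `p`:
```rust
let spec = CurveSpec {
  a: -F::from(3u64),
  b: F::from_str_vartime(
    "4188043517836764736459661287169077812555441231147410753119540549773825148767",
  )
  .unwrap(),
};
```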

View File

@@ -0,0 +1,293 @@
use zeroize::Zeroize;
// Use black_box when possible
#[rustversion::since(1.66)]
use core::hint::black_box;
#[rustversion::before(1.66)]
fn black_box<T>(val: T) -> T {
val
}
pub(crate) fn u8_from_bool(bit_ref: &mut bool) -> u8 {
let bit_ref = black_box(bit_ref);
let mut bit = black_box(*bit_ref);
let res = black_box(bit as u8);
bit.zeroize();
debug_assert!((res | 1) == 1);
bit_ref.zeroize();
res
}
macro_rules! math_op {
(
$Value: ident,
$Other: ident,
$Op: ident,
$op_fn: ident,
$Assign: ident,
$assign_fn: ident,
$function: expr
) => {
impl $Op<$Other> for $Value {
type Output = $Value;
fn $op_fn(self, other: $Other) -> Self::Output {
Self($function(self.0, other.0))
}
}
impl $Assign<$Other> for $Value {
fn $assign_fn(&mut self, other: $Other) {
self.0 = $function(self.0, other.0);
}
}
impl<'a> $Op<&'a $Other> for $Value {
type Output = $Value;
fn $op_fn(self, other: &'a $Other) -> Self::Output {
Self($function(self.0, other.0))
}
}
impl<'a> $Assign<&'a $Other> for $Value {
fn $assign_fn(&mut self, other: &'a $Other) {
self.0 = $function(self.0, other.0);
}
}
};
}
macro_rules! from_wrapper {
($wrapper: ident, $inner: ident, $uint: ident) => {
impl From<$uint> for $wrapper {
fn from(a: $uint) -> $wrapper {
Self(Residue::new(&$inner::from(a)))
}
}
};
}
macro_rules! field {
(
$FieldName: ident,
$ResidueType: ident,
$MODULUS_STR: ident,
$MODULUS: ident,
$WIDE_MODULUS: ident,
$NUM_BITS: literal,
$MULTIPLICATIVE_GENERATOR: literal,
$S: literal,
$ROOT_OF_UNITY: literal,
$DELTA: literal,
) => {
use core::{
ops::{DerefMut, Add, AddAssign, Neg, Sub, SubAssign, Mul, MulAssign},
iter::{Sum, Product},
};
use subtle::{Choice, CtOption, ConstantTimeEq, ConstantTimeLess, ConditionallySelectable};
use rand_core::RngCore;
use crypto_bigint::{Integer, NonZero, Encoding, impl_modulus};
use ciphersuite::group::ff::{
Field, PrimeField, FieldBits, PrimeFieldBits, helpers::sqrt_ratio_generic,
};
use $crate::backend::u8_from_bool;
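// Wide reduction from 512 bits to the field, used by `random` to map uniform bytes to a
// (negligibly biased) uniform field element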
fn reduce(x: U512) -> U256 {
U256::from_le_slice(&x.rem(&NonZero::new($WIDE_MODULUS).unwrap()).to_le_bytes()[.. 32])
}
impl ConstantTimeEq for $FieldName {
fn ct_eq(&self, other: &Self) -> Choice {
self.0.ct_eq(&other.0)
}
}
impl ConditionallySelectable for $FieldName {
fn conditional_select(a: &Self, b: &Self, choice: Choice) -> Self {
$FieldName(Residue::conditional_select(&a.0, &b.0, choice))
}
}
math_op!($FieldName, $FieldName, Add, add, AddAssign, add_assign, |x: $ResidueType, y| x
.add(&y));
math_op!($FieldName, $FieldName, Sub, sub, SubAssign, sub_assign, |x: $ResidueType, y| x
.sub(&y));
math_op!($FieldName, $FieldName, Mul, mul, MulAssign, mul_assign, |x: $ResidueType, y| x
.mul(&y));
from_wrapper!($FieldName, U256, u8);
from_wrapper!($FieldName, U256, u16);
from_wrapper!($FieldName, U256, u32);
from_wrapper!($FieldName, U256, u64);
from_wrapper!($FieldName, U256, u128);
impl Neg for $FieldName {
type Output = $FieldName;
fn neg(self) -> $FieldName {
Self(self.0.neg())
}
}
impl<'a> Neg for &'a $FieldName {
type Output = $FieldName;
fn neg(self) -> Self::Output {
(*self).neg()
}
}
impl $FieldName {
/// Perform an exponentiation.
pub fn pow(&self, other: $FieldName) -> $FieldName {
let mut table = [Self(Residue::ONE); 16];
table[1] = *self;
for i in 2 .. 16 {
table[i] = table[i - 1] * self;
}
let mut res = Self(Residue::ONE);
let mut bits = 0;
for (i, mut bit) in other.to_le_bits().iter_mut().rev().enumerate() {
bits <<= 1;
let mut bit = u8_from_bool(bit.deref_mut());
bits |= bit;
bit.zeroize();
if ((i + 1) % 4) == 0 {
if i != 3 {
for _ in 0 .. 4 {
res *= res;
}
}
let mut factor = table[0];
for (j, candidate) in table[1 ..].iter().enumerate() {
let j = j + 1;
factor = Self::conditional_select(&factor, &candidate, usize::from(bits).ct_eq(&j));
}
res *= factor;
bits = 0;
}
}
res
}
}
impl Field for $FieldName {
const ZERO: Self = Self(Residue::ZERO);
const ONE: Self = Self(Residue::ONE);
fn random(mut rng: impl RngCore) -> Self {
let mut bytes = [0; 64];
rng.fill_bytes(&mut bytes);
$FieldName(Residue::new(&reduce(U512::from_le_slice(bytes.as_ref()))))
}
fn square(&self) -> Self {
Self(self.0.square())
}
fn double(&self) -> Self {
*self + self
}
fn invert(&self) -> CtOption<Self> {
let res = self.0.invert();
CtOption::new(Self(res.0), res.1.into())
}
fn sqrt(&self) -> CtOption<Self> {
// (p + 1) // 4, as valid since p % 4 == 3
let mod_plus_one_div_four = $MODULUS.saturating_add(&U256::ONE).wrapping_div(&(4u8.into()));
let res = self.pow(Self($ResidueType::new_checked(&mod_plus_one_div_four).unwrap()));
CtOption::new(res, res.square().ct_eq(self))
}
fn sqrt_ratio(num: &Self, div: &Self) -> (Choice, Self) {
sqrt_ratio_generic(num, div)
}
}
impl PrimeField for $FieldName {
type Repr = [u8; 32];
const MODULUS: &'static str = $MODULUS_STR;
const NUM_BITS: u32 = $NUM_BITS;
const CAPACITY: u32 = $NUM_BITS - 1;
const TWO_INV: Self = $FieldName($ResidueType::new(&U256::from_u8(2)).invert().0);
const MULTIPLICATIVE_GENERATOR: Self =
Self(Residue::new(&U256::from_u8($MULTIPLICATIVE_GENERATOR)));
const S: u32 = $S;
const ROOT_OF_UNITY: Self = $FieldName(Residue::new(&U256::from_be_hex($ROOT_OF_UNITY)));
const ROOT_OF_UNITY_INV: Self = Self(Self::ROOT_OF_UNITY.0.invert().0);
const DELTA: Self = $FieldName(Residue::new(&U256::from_be_hex($DELTA)));
fn from_repr(bytes: Self::Repr) -> CtOption<Self> {
let res = U256::from_le_slice(&bytes);
CtOption::new($FieldName(Residue::new(&res)), res.ct_lt(&$MODULUS))
}
fn to_repr(&self) -> Self::Repr {
let mut repr = [0; 32];
repr.copy_from_slice(&self.0.retrieve().to_le_bytes());
repr
}
fn is_odd(&self) -> Choice {
self.0.retrieve().is_odd()
}
}
impl PrimeFieldBits for $FieldName {
type ReprBits = [u8; 32];
fn to_le_bits(&self) -> FieldBits<Self::ReprBits> {
self.to_repr().into()
}
fn char_le_bits() -> FieldBits<Self::ReprBits> {
let mut repr = [0; 32];
repr.copy_from_slice(&MODULUS.to_le_bytes());
repr.into()
}
}
impl Sum<$FieldName> for $FieldName {
fn sum<I: Iterator<Item = $FieldName>>(iter: I) -> $FieldName {
let mut res = $FieldName::ZERO;
for item in iter {
res += item;
}
res
}
}
impl<'a> Sum<&'a $FieldName> for $FieldName {
fn sum<I: Iterator<Item = &'a $FieldName>>(iter: I) -> $FieldName {
iter.cloned().sum()
}
}
impl Product<$FieldName> for $FieldName {
fn product<I: Iterator<Item = $FieldName>>(iter: I) -> $FieldName {
let mut res = $FieldName::ONE;
for item in iter {
res *= item;
}
res
}
}
impl<'a> Product<&'a $FieldName> for $FieldName {
fn product<I: Iterator<Item = &'a $FieldName>>(iter: I) -> $FieldName {
iter.cloned().product()
}
}
};
}

View File

@@ -0,0 +1,47 @@
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
#![doc = include_str!("../README.md")]
use generic_array::typenum::{Sum, Diff, Quot, U, U1, U2};
use ciphersuite::group::{ff::PrimeField, Group};
#[macro_use]
mod backend;
mod scalar;
pub use scalar::Scalar;
pub use dalek_ff_group::Scalar as FieldElement;
mod point;
pub use point::Point;
/// Ciphersuite for Embedwards25519.
///
/// hash_to_F is implemented with a naive concatenation of the dst and data, allowing transposition
/// between the two. This means `dst: b"abc", data: b"def"` will produce the same scalar as
/// `dst: b"abcdef", data: b""`. Please use carefully, never letting one DST be a substring of
/// another.
#[derive(Clone, Copy, PartialEq, Eq, Debug, zeroize::Zeroize)]
pub struct Embedwards25519;
impl ciphersuite::Ciphersuite for Embedwards25519 {
type F = Scalar;
type G = Point;
type H = blake2::Blake2b512;
const ID: &'static [u8] = b"embedwards25519";
fn generator() -> Self::G {
Point::generator()
}
fn hash_to_F(dst: &[u8], data: &[u8]) -> Self::F {
use blake2::Digest;
Scalar::wide_reduce(Self::H::digest([dst, data].concat()).as_slice().try_into().unwrap())
}
}
impl generalized_bulletproofs_ec_gadgets::DiscreteLogParameters for Embedwards25519 {
type ScalarBits = U<{ Scalar::NUM_BITS as usize }>;
type XCoefficients = Quot<Sum<Self::ScalarBits, U1>, U2>;
type XCoefficientsMinusOne = Diff<Self::XCoefficients, U1>;
type YxCoefficients = Diff<Quot<Sum<Sum<Self::ScalarBits, U1>, U1>, U2>, U2>;
}
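
To illustrate the `hash_to_F` transposition caveat documented above, a minimal
sketch (again assuming the hypothetical `embedwards25519` crate name):
```rust
use ciphersuite::Ciphersuite;
use embedwards25519::Embedwards25519;

fn main() {
  // The naive concatenation hashes the same bytes in both calls, so the
  // scalars are equal despite the (dst, data) pairs differing
  let a = Embedwards25519::hash_to_F(b"abc", b"def");
  let b = Embedwards25519::hash_to_F(b"abcdef", b"");
  assert_eq!(a, b);
}
```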

View File

@@ -0,0 +1,415 @@
use core::{
ops::{DerefMut, Add, AddAssign, Neg, Sub, SubAssign, Mul, MulAssign},
iter::Sum,
};
use rand_core::RngCore;
use zeroize::Zeroize;
use subtle::{Choice, CtOption, ConstantTimeEq, ConditionallySelectable};
use ciphersuite::group::{
ff::{Field, PrimeField, PrimeFieldBits},
Group, GroupEncoding,
prime::PrimeGroup,
};
use crate::{backend::u8_from_bool, Scalar, FieldElement};
#[allow(non_snake_case)]
fn B() -> FieldElement {
FieldElement::from_repr(hex_literal::hex!(
"5f07603a853f20370b682036210d463e64903a23ea669d07ca26cfc13f594209"
))
.unwrap()
}
fn recover_y(x: FieldElement) -> CtOption<FieldElement> {
// x**3 - 3 * x + B
((x.square() * x) - (x.double() + x) + B()).sqrt()
}
/// Point.
#[derive(Clone, Copy, Debug, Zeroize)]
#[repr(C)]
pub struct Point {
x: FieldElement, // / Z
y: FieldElement, // / Z
z: FieldElement,
}
impl ConstantTimeEq for Point {
fn ct_eq(&self, other: &Self) -> Choice {
let x1 = self.x * other.z;
let x2 = other.x * self.z;
let y1 = self.y * other.z;
let y2 = other.y * self.z;
(self.x.is_zero() & other.x.is_zero()) | (x1.ct_eq(&x2) & y1.ct_eq(&y2))
}
}
impl PartialEq for Point {
fn eq(&self, other: &Point) -> bool {
self.ct_eq(other).into()
}
}
impl Eq for Point {}
impl ConditionallySelectable for Point {
fn conditional_select(a: &Self, b: &Self, choice: Choice) -> Self {
Point {
x: FieldElement::conditional_select(&a.x, &b.x, choice),
y: FieldElement::conditional_select(&a.y, &b.y, choice),
z: FieldElement::conditional_select(&a.z, &b.z, choice),
}
}
}
impl Add for Point {
type Output = Point;
#[allow(non_snake_case)]
fn add(self, other: Self) -> Self {
// add-2015-rcb
let a = -FieldElement::from(3u64);
let B = B();
let b3 = B + B + B;
let X1 = self.x;
let Y1 = self.y;
let Z1 = self.z;
let X2 = other.x;
let Y2 = other.y;
let Z2 = other.z;
let t0 = X1 * X2;
let t1 = Y1 * Y2;
let t2 = Z1 * Z2;
let t3 = X1 + Y1;
let t4 = X2 + Y2;
let t3 = t3 * t4;
let t4 = t0 + t1;
let t3 = t3 - t4;
let t4 = X1 + Z1;
let t5 = X2 + Z2;
let t4 = t4 * t5;
let t5 = t0 + t2;
let t4 = t4 - t5;
let t5 = Y1 + Z1;
let X3 = Y2 + Z2;
let t5 = t5 * X3;
let X3 = t1 + t2;
let t5 = t5 - X3;
let Z3 = a * t4;
let X3 = b3 * t2;
let Z3 = X3 + Z3;
let X3 = t1 - Z3;
let Z3 = t1 + Z3;
let Y3 = X3 * Z3;
let t1 = t0 + t0;
let t1 = t1 + t0;
let t2 = a * t2;
let t4 = b3 * t4;
let t1 = t1 + t2;
let t2 = t0 - t2;
let t2 = a * t2;
let t4 = t4 + t2;
let t0 = t1 * t4;
let Y3 = Y3 + t0;
let t0 = t5 * t4;
let X3 = t3 * X3;
let X3 = X3 - t0;
let t0 = t3 * t1;
let Z3 = t5 * Z3;
let Z3 = Z3 + t0;
Point { x: X3, y: Y3, z: Z3 }
}
}
impl AddAssign for Point {
fn add_assign(&mut self, other: Point) {
*self = *self + other;
}
}
impl Add<&Point> for Point {
type Output = Point;
fn add(self, other: &Point) -> Point {
self + *other
}
}
impl AddAssign<&Point> for Point {
fn add_assign(&mut self, other: &Point) {
*self += *other;
}
}
impl Neg for Point {
type Output = Point;
fn neg(self) -> Self {
Point { x: self.x, y: -self.y, z: self.z }
}
}
impl Sub for Point {
type Output = Point;
#[allow(clippy::suspicious_arithmetic_impl)]
fn sub(self, other: Self) -> Self {
self + other.neg()
}
}
impl SubAssign for Point {
fn sub_assign(&mut self, other: Point) {
*self = *self - other;
}
}
impl Sub<&Point> for Point {
type Output = Point;
fn sub(self, other: &Point) -> Point {
self - *other
}
}
impl SubAssign<&Point> for Point {
fn sub_assign(&mut self, other: &Point) {
*self -= *other;
}
}
impl Group for Point {
type Scalar = Scalar;
fn random(mut rng: impl RngCore) -> Self {
loop {
let mut bytes = [0; 32];
rng.fill_bytes(bytes.as_mut());
let opt = Self::from_bytes(&bytes);
if opt.is_some().into() {
return opt.unwrap();
}
}
}
fn identity() -> Self {
Point { x: FieldElement::ZERO, y: FieldElement::ONE, z: FieldElement::ZERO }
}
fn generator() -> Self {
Point {
x: FieldElement::from_repr(hex_literal::hex!(
"0100000000000000000000000000000000000000000000000000000000000000"
))
.unwrap(),
y: FieldElement::from_repr(hex_literal::hex!(
"2e4118080a484a3dfbafe2199a0e36b7193581d676c0dadfa376b0265616020c"
))
.unwrap(),
z: FieldElement::ONE,
}
}
fn is_identity(&self) -> Choice {
self.z.ct_eq(&FieldElement::ZERO)
}
#[allow(non_snake_case)]
fn double(&self) -> Self {
// dbl-2007-bl-2
let X1 = self.x;
let Y1 = self.y;
let Z1 = self.z;
let w = (X1 - Z1) * (X1 + Z1);
let w = w.double() + w;
let s = (Y1 * Z1).double();
let ss = s.square();
let sss = s * ss;
let R = Y1 * s;
let RR = R.square();
let B_ = (X1 * R).double();
let h = w.square() - B_.double();
let X3 = h * s;
let Y3 = w * (B_ - h) - RR.double();
let Z3 = sss;
let res = Self { x: X3, y: Y3, z: Z3 };
// If self is identity, res will not be well-formed
// Accordingly, we return self if self was the identity
Self::conditional_select(&res, self, self.is_identity())
}
}
impl Sum<Point> for Point {
fn sum<I: Iterator<Item = Point>>(iter: I) -> Point {
let mut res = Self::identity();
for i in iter {
res += i;
}
res
}
}
impl<'a> Sum<&'a Point> for Point {
fn sum<I: Iterator<Item = &'a Point>>(iter: I) -> Point {
Point::sum(iter.cloned())
}
}
impl Mul<Scalar> for Point {
type Output = Point;
fn mul(self, mut other: Scalar) -> Point {
// Precompute a 16-entry table of small multiples of the point for a fixed 4-bit window
let mut table = [Point::identity(); 16];
table[1] = self;
for i in 2 .. 16 {
table[i] = table[i - 1] + self;
}
let mut res = Self::identity();
let mut bits = 0;
for (i, mut bit) in other.to_le_bits().iter_mut().rev().enumerate() {
bits <<= 1;
let mut bit = u8_from_bool(bit.deref_mut());
bits |= bit;
bit.zeroize();
if ((i + 1) % 4) == 0 {
if i != 3 {
for _ in 0 .. 4 {
res = res.double();
}
}
let mut term = table[0];
for (j, candidate) in table[1 ..].iter().enumerate() {
let j = j + 1;
term = Self::conditional_select(&term, candidate, usize::from(bits).ct_eq(&j));
}
res += term;
bits = 0;
}
}
other.zeroize();
res
}
}
impl MulAssign<Scalar> for Point {
fn mul_assign(&mut self, other: Scalar) {
*self = *self * other;
}
}
impl Mul<&Scalar> for Point {
type Output = Point;
fn mul(self, other: &Scalar) -> Point {
self * *other
}
}
impl MulAssign<&Scalar> for Point {
fn mul_assign(&mut self, other: &Scalar) {
*self *= *other;
}
}
impl GroupEncoding for Point {
type Repr = [u8; 32];
fn from_bytes(bytes: &Self::Repr) -> CtOption<Self> {
// Extract and clear the sign bit
let mut bytes = *bytes;
let sign = Choice::from(bytes[31] >> 7);
bytes[31] &= u8::MAX >> 1;
// Parse x, recover y
FieldElement::from_repr(bytes).and_then(|x| {
let is_identity = x.is_zero();
let y = recover_y(x).map(|mut y| {
y = <_>::conditional_select(&y, &-y, y.is_odd().ct_eq(&!sign));
y
});
// If this is the identity, set y to 1
let y =
CtOption::conditional_select(&y, &CtOption::new(FieldElement::ONE, 1.into()), is_identity);
// Create the point if we have a y solution
let point = y.map(|y| Point { x, y, z: FieldElement::ONE });
let not_negative_zero = !(is_identity & sign);
// Only return the point if it isn't -0
CtOption::conditional_select(
&CtOption::new(Point::identity(), 0.into()),
&point,
not_negative_zero,
)
})
}
fn from_bytes_unchecked(bytes: &Self::Repr) -> CtOption<Self> {
Point::from_bytes(bytes)
}
fn to_bytes(&self) -> Self::Repr {
let Some(z) = Option::<FieldElement>::from(self.z.invert()) else {
return [0; 32];
};
let x = self.x * z;
let y = self.y * z;
let mut res = [0; 32];
res.as_mut().copy_from_slice(&x.to_repr());
// The following conditional select normalizes the sign to 0 when x is 0
let y_sign = u8::conditional_select(&y.is_odd().unwrap_u8(), &0, x.ct_eq(&FieldElement::ZERO));
res[31] |= y_sign << 7;
res
}
}
impl PrimeGroup for Point {}
impl ec_divisors::DivisorCurve for Point {
type FieldElement = FieldElement;
fn a() -> Self::FieldElement {
-FieldElement::from(3u64)
}
fn b() -> Self::FieldElement {
B()
}
fn to_xy(point: Self) -> Option<(Self::FieldElement, Self::FieldElement)> {
let z: Self::FieldElement = Option::from(point.z.invert())?;
Some((point.x * z, point.y * z))
}
}
#[test]
fn test_curve() {
ff_group_tests::group::test_prime_group_bits::<_, Point>(&mut rand_core::OsRng);
}
#[test]
fn generator() {
assert_eq!(
Point::generator(),
Point::from_bytes(&hex_literal::hex!(
"0100000000000000000000000000000000000000000000000000000000000000"
))
.unwrap()
);
}
#[test]
fn zero_x_is_invalid() {
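// The identity is encoded with x = 0, so no on-curve point may have an x-coordinate of 0
// (keeping the encoding unambiguous)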
assert!(Option::<FieldElement>::from(recover_y(FieldElement::ZERO)).is_none());
}
// Checks random won't infinitely loop
#[test]
fn random() {
Point::random(&mut rand_core::OsRng);
}

View File

@@ -0,0 +1,52 @@
use zeroize::{DefaultIsZeroes, Zeroize};
use crypto_bigint::{
U256, U512,
modular::constant_mod::{ResidueParams, Residue},
};
const MODULUS_STR: &str = "0fffffffffffffffffffffffffffffffe53f4debb78ff96877063f0306eef96b";
impl_modulus!(EmbedwardsQ, U256, MODULUS_STR);
type ResidueType = Residue<EmbedwardsQ, { EmbedwardsQ::LIMBS }>;
/// The Scalar field of Embedwards25519.
///
/// This is the prime field of order `q`, the order of the Embedwards25519 group.
#[derive(Clone, Copy, PartialEq, Eq, Default, Debug)]
#[repr(C)]
pub struct Scalar(pub(crate) ResidueType);
impl DefaultIsZeroes for Scalar {}
pub(crate) const MODULUS: U256 = U256::from_be_hex(MODULUS_STR);
const WIDE_MODULUS: U512 = U512::from_be_hex(concat!(
"0000000000000000000000000000000000000000000000000000000000000000",
"0fffffffffffffffffffffffffffffffe53f4debb78ff96877063f0306eef96b",
));
field!(
Scalar,
ResidueType,
MODULUS_STR,
MODULUS,
WIDE_MODULUS,
252,
10,
1,
"0fffffffffffffffffffffffffffffffe53f4debb78ff96877063f0306eef96a",
"0000000000000000000000000000000000000000000000000000000000000064",
);
impl Scalar {
/// Perform a wide reduction, presumably to obtain an unbiased Scalar field element.
pub fn wide_reduce(bytes: [u8; 64]) -> Scalar {
Scalar(Residue::new(&reduce(U512::from_le_slice(bytes.as_ref()))))
}
}
#[test]
fn test_scalar_field() {
ff_group_tests::prime_field::test_prime_field_bits::<_, Scalar>(&mut rand_core::OsRng);
}
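
A small sketch of unbiased sampling via `wide_reduce` (hypothetical usage;
`OsRng` is from `rand_core` with its `getrandom` feature):
```rust
use rand_core::{RngCore, OsRng};
use embedwards25519::Scalar;

fn main() {
  // Reducing 64 uniform bytes modulo the ~252-bit q leaves negligible bias
  let mut wide = [0; 64];
  OsRng.fill_bytes(&mut wide);
  let _scalar = Scalar::wide_reduce(wide);
}
```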

View File

@@ -0,0 +1,34 @@
[package]
name = "generalized-bulletproofs"
version = "0.1.0"
description = "Generalized Bulletproofs"
license = "MIT"
repository = "https://github.com/serai-dex/serai/tree/develop/crypto/evrf/generalized-bulletproofs"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = ["ciphersuite", "ff", "group"]
edition = "2021"
rust-version = "1.80"
[package.metadata.docs.rs]
all-features = true
rustdoc-args = ["--cfg", "docsrs"]
[dependencies]
rand_core = { version = "0.6", default-features = false, features = ["std"] }
zeroize = { version = "^1.5", default-features = false, features = ["std", "zeroize_derive"] }
blake2 = { version = "0.10", default-features = false, features = ["std"] }
multiexp = { path = "../../multiexp", version = "0.4", default-features = false, features = ["std", "batch"] }
ciphersuite = { path = "../../ciphersuite", version = "0.4", default-features = false, features = ["std"] }
[dev-dependencies]
rand_core = { version = "0.6", features = ["getrandom"] }
transcript = { package = "flexible-transcript", path = "../../transcript", features = ["recommended"] }
ciphersuite = { path = "../../ciphersuite", features = ["ristretto"] }
[features]
tests = []

View File

@@ -0,0 +1,21 @@
MIT License
Copyright (c) 2021-2024 Luke Parker
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

View File

@@ -0,0 +1,6 @@
# Generalized Bulletproofs
An implementation of
[Generalized Bulletproofs](https://repo.getmonero.org/monero-project/ccs-proposals/uploads/a9baa50c38c6312efc0fea5c6a188bb9/gbp.pdf),
a variant of the Bulletproofs arithmetic circuit statement to support Pedersen
vector commitments.
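
As a sketch of how statements are expressed, a linear constraint may be built
with the `LinComb` type defined in this crate (assuming the `empty`/`term`
builder methods used internally are public):
```rust
use ciphersuite::{group::ff::Field, Ciphersuite};
use generalized_bulletproofs::arithmetic_circuit_proof::{Variable, LinComb};

// Constrain aL[0] + aR[0] - aO[0] = 0; each LinComb passed to a statement is
// bound to evaluate to zero
fn sum_constraint<C: Ciphersuite>() -> LinComb<C::F> {
  LinComb::empty()
    .term(C::F::ONE, Variable::aL(0))
    .term(C::F::ONE, Variable::aR(0))
    .term(-C::F::ONE, Variable::aO(0))
}
```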

View File

@@ -0,0 +1,679 @@
use rand_core::{RngCore, CryptoRng};
use zeroize::{Zeroize, ZeroizeOnDrop};
use multiexp::{multiexp, multiexp_vartime};
use ciphersuite::{group::ff::Field, Ciphersuite};
use crate::{
ScalarVector, PointVector, ProofGenerators, PedersenCommitment, PedersenVectorCommitment,
BatchVerifier,
transcript::*,
lincomb::accumulate_vector,
inner_product::{IpError, IpStatement, IpWitness, P},
};
pub use crate::lincomb::{Variable, LinComb};
/// An Arithmetic Circuit Statement.
///
/// Bulletproofs' constraints are of the form
/// `aL * aR = aO, WL * aL + WR * aR + WO * aO = WV * V + c`.
///
/// Generalized Bulletproofs modifies this to
/// `aL * aR = aO, WL * aL + WR * aR + WO * aO + WCG * C_G + WCH * C_H = WV * V + c`.
///
/// We implement the latter, yet represented (for simplicity) as
/// `aL * aR = aO, WL * aL + WR * aR + WO * aO + WCG * C_G + WCH * C_H + WV * V + c = 0`.
#[derive(Clone, Debug)]
pub struct ArithmeticCircuitStatement<'a, C: Ciphersuite> {
generators: ProofGenerators<'a, C>,
constraints: Vec<LinComb<C::F>>,
C: PointVector<C>,
V: PointVector<C>,
}
impl<'a, C: Ciphersuite> Zeroize for ArithmeticCircuitStatement<'a, C> {
fn zeroize(&mut self) {
self.constraints.zeroize();
self.C.zeroize();
self.V.zeroize();
}
}
/// The witness for an arithmetic circuit statement.
#[derive(Clone, Debug, Zeroize, ZeroizeOnDrop)]
pub struct ArithmeticCircuitWitness<C: Ciphersuite> {
aL: ScalarVector<C::F>,
aR: ScalarVector<C::F>,
aO: ScalarVector<C::F>,
c: Vec<PedersenVectorCommitment<C>>,
v: Vec<PedersenCommitment<C>>,
}
/// An error incurred during arithmetic circuit proof operations.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
pub enum AcError {
/// The vectors of scalars which are multiplied against each other were of different lengths.
DifferingLrLengths,
/// The matrices of constraints are of different lengths.
InconsistentAmountOfConstraints,
/// A constraint referred to a non-existent term.
ConstrainedNonExistentTerm,
/// A constraint referred to a non-existent commitment.
ConstrainedNonExistentCommitment,
/// There weren't enough generators to prove for this statement.
NotEnoughGenerators,
/// The witness was inconsistent to the statement.
///
/// Sanity checks on the witness are always performed. If the library is compiled with debug
/// assertions on, the satisfaction of all constraints and the validity of the commitments are
/// additionally checked.
InconsistentWitness,
/// There was an error from the inner-product proof.
Ip(IpError),
/// The proof wasn't complete and the necessary values could not be read from the transcript.
IncompleteProof,
}
impl<C: Ciphersuite> ArithmeticCircuitWitness<C> {
/// Constructs a new witness instance.
pub fn new(
aL: ScalarVector<C::F>,
aR: ScalarVector<C::F>,
c: Vec<PedersenVectorCommitment<C>>,
v: Vec<PedersenCommitment<C>>,
) -> Result<Self, AcError> {
if aL.len() != aR.len() {
Err(AcError::DifferingLrLengths)?;
}
// The Pedersen Vector Commitments don't have their variables' lengths checked as they aren't
// paired off with each other as aL, aR are
// The PVC commit function ensures there's enough generators for their amount of terms
// If there aren't enough/the same generators when this is proven for, it'll trigger
// InconsistentWitness
let aO = aL.clone() * &aR;
Ok(ArithmeticCircuitWitness { aL, aR, aO, c, v })
}
}
struct YzChallenges<C: Ciphersuite> {
y_inv: ScalarVector<C::F>,
z: ScalarVector<C::F>,
}
impl<'a, C: Ciphersuite> ArithmeticCircuitStatement<'a, C> {
// The amount of multiplications performed.
fn n(&self) -> usize {
self.generators.len()
}
// The amount of constraints.
fn q(&self) -> usize {
self.constraints.len()
}
// The amount of Pedersen vector commitments.
fn c(&self) -> usize {
self.C.len()
}
// The amount of Pedersen commitments.
fn m(&self) -> usize {
self.V.len()
}
/// Create a new ArithmeticCircuitStatement for the specified relationship.
///
/// The `LinComb`s passed as `constraints` will be bound to evaluate to 0.
///
/// The constraints are not transcripted. They're expected to be deterministic from the context
/// and higher-level statement. If your constraints are variable, you MUST transcript them before
/// calling prove/verify.
///
/// The commitments are expected to have been transcripted externally to this statement's
/// invocation. That's practically ensured by taking a `Commitments` struct here, which is only
/// obtainable via a transcript.
pub fn new(
generators: ProofGenerators<'a, C>,
constraints: Vec<LinComb<C::F>>,
commitments: Commitments<C>,
) -> Result<Self, AcError> {
let Commitments { C, V } = commitments;
for constraint in &constraints {
if Some(generators.len()) <= constraint.highest_a_index {
Err(AcError::ConstrainedNonExistentTerm)?;
}
if Some(C.len()) <= constraint.highest_c_index {
Err(AcError::ConstrainedNonExistentCommitment)?;
}
if Some(V.len()) <= constraint.highest_v_index {
Err(AcError::ConstrainedNonExistentCommitment)?;
}
}
Ok(Self { generators, constraints, C, V })
}
fn yz_challenges(&self, y: C::F, z_1: C::F) -> YzChallenges<C> {
let y_inv = y.invert().unwrap();
let y_inv = ScalarVector::powers(y_inv, self.n());
// Powers of z *starting with z**1*
// We could reuse `powers` and remove the first element, yet this is cheaper than the shift
// that would be required
let q = self.q();
let mut z = ScalarVector(Vec::with_capacity(q));
z.0.push(z_1);
for _ in 1 .. q {
z.0.push(*z.0.last().unwrap() * z_1);
}
z.0.truncate(q);
YzChallenges { y_inv, z }
}
/// Prove for this statement/witness.
pub fn prove<R: RngCore + CryptoRng>(
self,
rng: &mut R,
transcript: &mut Transcript,
mut witness: ArithmeticCircuitWitness<C>,
) -> Result<(), AcError> {
let n = self.n();
let c = self.c();
let m = self.m();
// Check the witness length and pad it to the necessary power of two
if witness.aL.len() > n {
Err(AcError::NotEnoughGenerators)?;
}
while witness.aL.len() < n {
witness.aL.0.push(C::F::ZERO);
witness.aR.0.push(C::F::ZERO);
witness.aO.0.push(C::F::ZERO);
}
for c in &mut witness.c {
if c.g_values.len() > n {
Err(AcError::NotEnoughGenerators)?;
}
if c.h_values.len() > n {
Err(AcError::NotEnoughGenerators)?;
}
// The Pedersen vector commitments internally have n terms
while c.g_values.len() < n {
c.g_values.0.push(C::F::ZERO);
}
while c.h_values.len() < n {
c.h_values.0.push(C::F::ZERO);
}
}
// Check the witness's consistency with the statement
if (c != witness.c.len()) || (m != witness.v.len()) {
Err(AcError::InconsistentWitness)?;
}
#[cfg(debug_assertions)]
{
for (commitment, opening) in self.V.0.iter().zip(witness.v.iter()) {
if *commitment != opening.commit(self.generators.g(), self.generators.h()) {
Err(AcError::InconsistentWitness)?;
}
}
for (commitment, opening) in self.C.0.iter().zip(witness.c.iter()) {
if Some(*commitment) !=
opening.commit(
self.generators.g_bold_slice(),
self.generators.h_bold_slice(),
self.generators.h(),
)
{
Err(AcError::InconsistentWitness)?;
}
}
for constraint in &self.constraints {
let eval =
constraint
.WL
.iter()
.map(|(i, weight)| *weight * witness.aL[*i])
.chain(constraint.WR.iter().map(|(i, weight)| *weight * witness.aR[*i]))
.chain(constraint.WO.iter().map(|(i, weight)| *weight * witness.aO[*i]))
.chain(
constraint.WCG.iter().zip(&witness.c).flat_map(|(weights, c)| {
weights.iter().map(|(j, weight)| *weight * c.g_values[*j])
}),
)
.chain(
constraint.WCH.iter().zip(&witness.c).flat_map(|(weights, c)| {
weights.iter().map(|(j, weight)| *weight * c.h_values[*j])
}),
)
.chain(constraint.WV.iter().map(|(i, weight)| *weight * witness.v[*i].value))
.chain(core::iter::once(constraint.c))
.sum::<C::F>();
if eval != C::F::ZERO {
Err(AcError::InconsistentWitness)?;
}
}
}
let alpha = C::F::random(&mut *rng);
let beta = C::F::random(&mut *rng);
let rho = C::F::random(&mut *rng);
let AI = {
let alg = witness.aL.0.iter().enumerate().map(|(i, aL)| (*aL, self.generators.g_bold(i)));
let arh = witness.aR.0.iter().enumerate().map(|(i, aR)| (*aR, self.generators.h_bold(i)));
let ah = core::iter::once((alpha, self.generators.h()));
let mut AI_terms = alg.chain(arh).chain(ah).collect::<Vec<_>>();
let AI = multiexp(&AI_terms);
AI_terms.zeroize();
AI
};
let AO = {
let aog = witness.aO.0.iter().enumerate().map(|(i, aO)| (*aO, self.generators.g_bold(i)));
let bh = core::iter::once((beta, self.generators.h()));
let mut AO_terms = aog.chain(bh).collect::<Vec<_>>();
let AO = multiexp(&AO_terms);
AO_terms.zeroize();
AO
};
let mut sL = ScalarVector(Vec::with_capacity(n));
let mut sR = ScalarVector(Vec::with_capacity(n));
for _ in 0 .. n {
sL.0.push(C::F::random(&mut *rng));
sR.0.push(C::F::random(&mut *rng));
}
let S = {
let slg = sL.0.iter().enumerate().map(|(i, sL)| (*sL, self.generators.g_bold(i)));
let srh = sR.0.iter().enumerate().map(|(i, sR)| (*sR, self.generators.h_bold(i)));
let rh = core::iter::once((rho, self.generators.h()));
let mut S_terms = slg.chain(srh).chain(rh).collect::<Vec<_>>();
let S = multiexp(&S_terms);
S_terms.zeroize();
S
};
transcript.push_point(AI);
transcript.push_point(AO);
transcript.push_point(S);
let y = transcript.challenge();
let z = transcript.challenge();
let YzChallenges { y_inv, z } = self.yz_challenges(y, z);
let y = ScalarVector::powers(y, n);
// t is a `2(n' + 1)`-term polynomial, where `n'` is `2 (c + 1)`
// While Bulletproofs discusses t as a 6-term polynomial, Generalized Bulletproofs re-defines it
// as `2(n' + 1)`-term.
// When `c = 0`, `n' = 2`, and t is `6`-term (which lines up with Bulletproofs having a 6-term
// polynomial).
// ni = n'
let ni = 2 * (c + 1);
// These indexes are from the Generalized Bulletproofs paper
#[rustfmt::skip]
let ilr = ni / 2; // 1 if c = 0
#[rustfmt::skip]
let io = ni; // 2 if c = 0
#[rustfmt::skip]
let is = ni + 1; // 3 if c = 0
#[rustfmt::skip]
let jlr = ni / 2; // 1 if c = 0
#[rustfmt::skip]
let jo = 0; // 0 if c = 0
#[rustfmt::skip]
let js = ni + 1; // 3 if c = 0
// If c = 0, these indexes perfectly align with the stated powers of X from the Bulletproofs
// paper for the following coefficients
// Declare the l and r polynomials, assigning the traditional coefficients to their positions
let mut l = vec![];
let mut r = vec![];
for _ in 0 .. (is + 1) {
l.push(ScalarVector::new(0));
r.push(ScalarVector::new(0));
}
let mut l_weights = ScalarVector::new(n);
let mut r_weights = ScalarVector::new(n);
let mut o_weights = ScalarVector::new(n);
for (constraint, z) in self.constraints.iter().zip(&z.0) {
accumulate_vector(&mut l_weights, &constraint.WL, *z);
accumulate_vector(&mut r_weights, &constraint.WR, *z);
accumulate_vector(&mut o_weights, &constraint.WO, *z);
}
l[ilr] = (r_weights * &y_inv) + &witness.aL;
l[io] = witness.aO.clone();
l[is] = sL;
r[jlr] = l_weights + &(witness.aR.clone() * &y);
r[jo] = o_weights - &y;
r[js] = sR * &y;
// Pad as expected
for l in &mut l {
debug_assert!((l.len() == 0) || (l.len() == n));
if l.len() == 0 {
*l = ScalarVector::new(n);
}
}
for r in &mut r {
debug_assert!((r.len() == 0) || (r.len() == n));
if r.len() == 0 {
*r = ScalarVector::new(n);
}
}
// We now fill in the vector commitments
// We use unused coefficients of l increasing from 0 (skipping ilr), and unused coefficients of
// r decreasing from n' (skipping jlr)
let mut cg_weights = Vec::with_capacity(witness.c.len());
let mut ch_weights = Vec::with_capacity(witness.c.len());
for i in 0 .. witness.c.len() {
let mut cg = ScalarVector::new(n);
let mut ch = ScalarVector::new(n);
for (constraint, z) in self.constraints.iter().zip(&z.0) {
if let Some(WCG) = constraint.WCG.get(i) {
accumulate_vector(&mut cg, WCG, *z);
}
if let Some(WCH) = constraint.WCH.get(i) {
accumulate_vector(&mut ch, WCH, *z);
}
}
cg_weights.push(cg);
ch_weights.push(ch);
}
for (i, (c, (cg_weights, ch_weights))) in
witness.c.iter().zip(cg_weights.into_iter().zip(ch_weights)).enumerate()
{
let i = i + 1;
let j = ni - i;
l[i] = c.g_values.clone();
l[j] = ch_weights * &y_inv;
r[j] = cg_weights;
r[i] = (c.h_values.clone() * &y) + &r[i];
}
// Multiply them to obtain t
let mut t = ScalarVector::new(1 + (2 * (l.len() - 1)));
for (i, l) in l.iter().enumerate() {
for (j, r) in r.iter().enumerate() {
let new_coeff = i + j;
t[new_coeff] += l.inner_product(r.0.iter());
}
}
// Per Bulletproofs, calculate masks tau for each t where (i > 0) && (i != 2)
// Per Generalized Bulletproofs, calculate masks tau for each t where i != n'
// With Bulletproofs, t[0] is zero, hence its omission, yet Generalized Bulletproofs uses it
let mut tau_before_ni = vec![];
for _ in 0 .. ni {
tau_before_ni.push(C::F::random(&mut *rng));
}
let mut tau_after_ni = vec![];
for _ in 0 .. t.0[(ni + 1) ..].len() {
tau_after_ni.push(C::F::random(&mut *rng));
}
// Calculate commitments to the coefficients of t, blinded by tau
debug_assert_eq!(t.0[0 .. ni].len(), tau_before_ni.len());
for (t, tau) in t.0[0 .. ni].iter().zip(tau_before_ni.iter()) {
transcript.push_point(multiexp(&[(*t, self.generators.g()), (*tau, self.generators.h())]));
}
debug_assert_eq!(t.0[(ni + 1) ..].len(), tau_after_ni.len());
for (t, tau) in t.0[(ni + 1) ..].iter().zip(tau_after_ni.iter()) {
transcript.push_point(multiexp(&[(*t, self.generators.g()), (*tau, self.generators.h())]));
}
let x: ScalarVector<C::F> = ScalarVector::powers(transcript.challenge(), t.len());
let poly_eval = |poly: &[ScalarVector<C::F>], x: &ScalarVector<_>| -> ScalarVector<_> {
let mut res = ScalarVector::<C::F>::new(poly[0].0.len());
for (i, coeff) in poly.iter().enumerate() {
res = res + &(coeff.clone() * x[i]);
}
res
};
let l = poly_eval(&l, &x);
let r = poly_eval(&r, &x);
let t_caret = l.inner_product(r.0.iter());
let mut V_weights = ScalarVector::new(self.V.len());
for (constraint, z) in self.constraints.iter().zip(&z.0) {
// We use `-z`, not `z`, as we write our constraint as `... + WV V = 0` not `= WV V + ..`
// This means we need to subtract `WV V` from both sides, which we accomplish here
accumulate_vector(&mut V_weights, &constraint.WV, -*z);
}
let tau_x = {
let mut tau_x_poly = vec![];
tau_x_poly.extend(tau_before_ni);
tau_x_poly.push(V_weights.inner_product(witness.v.iter().map(|v| &v.mask)));
tau_x_poly.extend(tau_after_ni);
let mut tau_x = C::F::ZERO;
for (i, coeff) in tau_x_poly.into_iter().enumerate() {
tau_x += coeff * x[i];
}
tau_x
};
// Calculate u from the masks, weighted by the powers of x at ilr/io/is
let u = {
// Calculate the first part of u
let mut u = (alpha * x[ilr]) + (beta * x[io]) + (rho * x[is]);
// Incorporate the commitment masks multiplied by the associated power of x
for (i, commitment) in witness.c.iter().enumerate() {
let i = i + 1;
u += x[i] * commitment.mask;
}
u
};
// Use the Inner-Product argument to prove for this
// P = t_caret * g + l * g_bold + r * (y_inv * h_bold)
let mut P_terms = Vec::with_capacity(1 + (2 * self.generators.len()));
debug_assert_eq!(l.len(), r.len());
for (i, (l, r)) in l.0.iter().zip(r.0.iter()).enumerate() {
P_terms.push((*l, self.generators.g_bold(i)));
P_terms.push((y_inv[i] * r, self.generators.h_bold(i)));
}
// Protocol 1, inlined, since our IpStatement is for Protocol 2
transcript.push_scalar(tau_x);
transcript.push_scalar(u);
transcript.push_scalar(t_caret);
let ip_x = transcript.challenge();
P_terms.push((ip_x * t_caret, self.generators.g()));
IpStatement::new(
self.generators,
y_inv,
ip_x,
// Safe since IpStatement isn't a ZK proof
P::Prover(multiexp_vartime(&P_terms)),
)
.unwrap()
.prove(transcript, IpWitness::new(l, r).unwrap())
.map_err(AcError::Ip)
}
/// Verify a proof for this statement.
pub fn verify<R: RngCore + CryptoRng>(
self,
rng: &mut R,
verifier: &mut BatchVerifier<C>,
transcript: &mut VerifierTranscript,
) -> Result<(), AcError> {
let n = self.n();
let c = self.c();
let ni = 2 * (c + 1);
let ilr = ni / 2;
let io = ni;
let is = ni + 1;
let jlr = ni / 2;
let l_r_poly_len = 1 + ni + 1;
let t_poly_len = (2 * l_r_poly_len) - 1;
let AI = transcript.read_point::<C>().map_err(|_| AcError::IncompleteProof)?;
let AO = transcript.read_point::<C>().map_err(|_| AcError::IncompleteProof)?;
let S = transcript.read_point::<C>().map_err(|_| AcError::IncompleteProof)?;
let y = transcript.challenge();
let z = transcript.challenge();
let YzChallenges { y_inv, z } = self.yz_challenges(y, z);
let mut l_weights = ScalarVector::new(n);
let mut r_weights = ScalarVector::new(n);
let mut o_weights = ScalarVector::new(n);
for (constraint, z) in self.constraints.iter().zip(&z.0) {
accumulate_vector(&mut l_weights, &constraint.WL, *z);
accumulate_vector(&mut r_weights, &constraint.WR, *z);
accumulate_vector(&mut o_weights, &constraint.WO, *z);
}
let r_weights = r_weights * &y_inv;
let delta = r_weights.inner_product(l_weights.0.iter());
let mut T_before_ni = Vec::with_capacity(ni);
let mut T_after_ni = Vec::with_capacity(t_poly_len - ni - 1);
for _ in 0 .. ni {
T_before_ni.push(transcript.read_point::<C>().map_err(|_| AcError::IncompleteProof)?);
}
for _ in 0 .. (t_poly_len - ni - 1) {
T_after_ni.push(transcript.read_point::<C>().map_err(|_| AcError::IncompleteProof)?);
}
let x: ScalarVector<C::F> = ScalarVector::powers(transcript.challenge(), t_poly_len);
let tau_x = transcript.read_scalar::<C>().map_err(|_| AcError::IncompleteProof)?;
let u = transcript.read_scalar::<C>().map_err(|_| AcError::IncompleteProof)?;
let t_caret = transcript.read_scalar::<C>().map_err(|_| AcError::IncompleteProof)?;
// Lines 88-90, modified per Generalized Bulletproofs as needed w.r.t. t
{
let verifier_weight = C::F::random(&mut *rng);
// lhs of the equation, weighted to enable batch verification
verifier.g += t_caret * verifier_weight;
verifier.h += tau_x * verifier_weight;
let mut V_weights = ScalarVector::new(self.V.len());
for (constraint, z) in self.constraints.iter().zip(&z.0) {
// We use `-z`, not `z`, as we write our constraint as `... + WV V = 0` not `= WV V + ..`
// This means we need to subtract `WV V` from both sides, which we accomplish here
accumulate_vector(&mut V_weights, &constraint.WV, -*z);
}
V_weights = V_weights * x[ni];
// rhs of the equation, negated to cause a sum to zero
// `delta - z...`, instead of `delta + z...`, is done for the same reason as in the above WV
// matrix transform
verifier.g -= verifier_weight *
x[ni] *
(delta - z.inner_product(self.constraints.iter().map(|constraint| &constraint.c)));
for pair in V_weights.0.into_iter().zip(self.V.0) {
verifier.additional.push((-verifier_weight * pair.0, pair.1));
}
for (i, T) in T_before_ni.into_iter().enumerate() {
verifier.additional.push((-verifier_weight * x[i], T));
}
for (i, T) in T_after_ni.into_iter().enumerate() {
verifier.additional.push((-verifier_weight * x[ni + 1 + i], T));
}
}
let verifier_weight = C::F::random(&mut *rng);
// Multiply `x` by `verifier_weight`, as this applies `verifier_weight` to most scalars and
// saves a notable amount of operations
let x = x * verifier_weight;
// This following block effectively calculates P, within the multiexp
{
verifier.additional.push((x[ilr], AI));
verifier.additional.push((x[io], AO));
// h' ** y is equivalent to h as h' is h ** y_inv
let mut log2_n = 0;
while (1 << log2_n) != n {
log2_n += 1;
}
verifier.h_sum[log2_n] -= verifier_weight;
verifier.additional.push((x[is], S));
// Lines 85-87 calculate WL, WR, WO
// We preserve them in terms of g_bold and h_bold for a more efficient multiexp
let mut h_bold_scalars = l_weights * x[jlr];
for (i, wr) in (r_weights * x[jlr]).0.into_iter().enumerate() {
verifier.g_bold[i] += wr;
}
// WO is weighted by x**jo where jo == 0, hence why we can ignore the x term
h_bold_scalars = h_bold_scalars + &(o_weights * verifier_weight);
let mut cg_weights = Vec::with_capacity(self.C.len());
let mut ch_weights = Vec::with_capacity(self.C.len());
for i in 0 .. self.C.len() {
let mut cg = ScalarVector::new(n);
let mut ch = ScalarVector::new(n);
for (constraint, z) in self.constraints.iter().zip(&z.0) {
if let Some(WCG) = constraint.WCG.get(i) {
accumulate_vector(&mut cg, WCG, *z);
}
if let Some(WCH) = constraint.WCH.get(i) {
accumulate_vector(&mut ch, WCH, *z);
}
}
cg_weights.push(cg);
ch_weights.push(ch);
}
// Push the terms for C, which increment from 0, and the terms for WC, which decrement from
// n'
for (i, (C, (WCG, WCH))) in
self.C.0.into_iter().zip(cg_weights.into_iter().zip(ch_weights)).enumerate()
{
let i = i + 1;
let j = ni - i;
verifier.additional.push((x[i], C));
h_bold_scalars = h_bold_scalars + &(WCG * x[j]);
for (i, scalar) in (WCH * &y_inv * x[j]).0.into_iter().enumerate() {
verifier.g_bold[i] += scalar;
}
}
// All terms for h_bold here have actually been for h_bold', h_bold * y_inv
h_bold_scalars = h_bold_scalars * &y_inv;
for (i, scalar) in h_bold_scalars.0.into_iter().enumerate() {
verifier.h_bold[i] += scalar;
}
// Remove u * h from P
verifier.h -= verifier_weight * u;
}
// Verify lines 88 and 92 with an Inner-Product statement
// This inlines Protocol 1, as our IpStatement implements Protocol 2
let ip_x = transcript.challenge();
// P is amended with this additional term
verifier.g += verifier_weight * ip_x * t_caret;
IpStatement::new(self.generators, y_inv, ip_x, P::Verifier { verifier_weight })
.unwrap()
.verify(verifier, transcript)
.map_err(AcError::Ip)?;
Ok(())
}
}

View File

@@ -0,0 +1,360 @@
use multiexp::multiexp_vartime;
use ciphersuite::{group::ff::Field, Ciphersuite};
#[rustfmt::skip]
use crate::{ScalarVector, PointVector, ProofGenerators, BatchVerifier, transcript::*, padded_pow_of_2};
/// An error from proving/verifying Inner-Product statements.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
pub enum IpError {
/// An incorrect amount of generators was provided.
IncorrectAmountOfGenerators,
/// The witness was inconsistent to the statement.
///
/// Sanity checks on the witness are always performed. If the library is compiled with debug
/// assertions on, whether or not this witness actually opens `P` is checked.
InconsistentWitness,
/// The proof wasn't complete and the necessary values could not be read from the transcript.
IncompleteProof,
}
#[derive(Clone, PartialEq, Eq, Debug)]
pub(crate) enum P<C: Ciphersuite> {
Verifier { verifier_weight: C::F },
Prover(C::G),
}
/// The Bulletproofs Inner-Product statement.
///
/// This is for usage with Protocol 2 from the Bulletproofs paper.
#[derive(Clone, Debug)]
pub(crate) struct IpStatement<'a, C: Ciphersuite> {
generators: ProofGenerators<'a, C>,
// Weights for h_bold
h_bold_weights: ScalarVector<C::F>,
// u as the discrete logarithm of G
u: C::F,
// P
P: P<C>,
}
/// The witness for the Bulletproofs Inner-Product statement.
#[derive(Clone, Debug)]
pub(crate) struct IpWitness<C: Ciphersuite> {
// a
a: ScalarVector<C::F>,
// b
b: ScalarVector<C::F>,
}
impl<C: Ciphersuite> IpWitness<C> {
/// Construct a new witness for an Inner-Product statement.
///
/// If the witness' length is not a power of two, it is padded to the next power of two.
///
/// This function returns None if the lengths of a, b are mismatched or either is empty.
pub(crate) fn new(mut a: ScalarVector<C::F>, mut b: ScalarVector<C::F>) -> Option<Self> {
if a.0.is_empty() || (a.len() != b.len()) {
None?;
}
// Pad to the nearest power of 2
let missing = padded_pow_of_2(a.len()) - a.len();
a.0.reserve(missing);
b.0.reserve(missing);
for _ in 0 .. missing {
a.0.push(C::F::ZERO);
b.0.push(C::F::ZERO);
}
Some(Self { a, b })
}
}
impl<'a, C: Ciphersuite> IpStatement<'a, C> {
/// Create a new Inner-Product statement.
///
/// This does not perform any transcripting of any variables within this statement. They must be
/// deterministic to the existing transcript.
pub(crate) fn new(
generators: ProofGenerators<'a, C>,
h_bold_weights: ScalarVector<C::F>,
u: C::F,
P: P<C>,
) -> Result<Self, IpError> {
if generators.h_bold_slice().len() != h_bold_weights.len() {
Err(IpError::IncorrectAmountOfGenerators)?
}
Ok(Self { generators, h_bold_weights, u, P })
}
/// Prove for this Inner-Product statement.
///
/// Returns an error if this statement couldn't be proven for (such as if the witness isn't
/// consistent).
pub(crate) fn prove(
self,
transcript: &mut Transcript,
witness: IpWitness<C>,
) -> Result<(), IpError> {
let (mut g_bold, mut h_bold, u, mut P, mut a, mut b) = {
let IpStatement { generators, h_bold_weights, u, P } = self;
let u = generators.g() * u;
// Ensure we have the exact amount of generators
if generators.g_bold_slice().len() != witness.a.len() {
Err(IpError::IncorrectAmountOfGenerators)?;
}
// Acquire a local copy of the generators
let g_bold = PointVector::<C>(generators.g_bold_slice().to_vec());
let h_bold = PointVector::<C>(generators.h_bold_slice().to_vec()).mul_vec(&h_bold_weights);
let IpWitness { a, b } = witness;
let P = match P {
P::Prover(point) => point,
P::Verifier { .. } => {
panic!("prove called with a P specification which was for the verifier")
}
};
// Ensure this witness actually opens this statement
#[cfg(debug_assertions)]
{
let ag = a.0.iter().cloned().zip(g_bold.0.iter().cloned());
let bh = b.0.iter().cloned().zip(h_bold.0.iter().cloned());
let cu = core::iter::once((a.inner_product(b.0.iter()), u));
if P != multiexp_vartime(&ag.chain(bh).chain(cu).collect::<Vec<_>>()) {
Err(IpError::InconsistentWitness)?;
}
}
(g_bold, h_bold, u, P, a, b)
};
// `else: (n > 1)` case, lines 18-35 of the Bulletproofs paper
// This interprets `g_bold.len()` as `n`
while g_bold.len() > 1 {
// Split a, b, g_bold, h_bold as needed for lines 20-24
let (a1, a2) = a.clone().split();
let (b1, b2) = b.clone().split();
let (g_bold1, g_bold2) = g_bold.split();
let (h_bold1, h_bold2) = h_bold.split();
let n_hat = g_bold1.len();
// Sanity
debug_assert_eq!(a1.len(), n_hat);
debug_assert_eq!(a2.len(), n_hat);
debug_assert_eq!(b1.len(), n_hat);
debug_assert_eq!(b2.len(), n_hat);
debug_assert_eq!(g_bold1.len(), n_hat);
debug_assert_eq!(g_bold2.len(), n_hat);
debug_assert_eq!(h_bold1.len(), n_hat);
debug_assert_eq!(h_bold2.len(), n_hat);
// cl, cr, lines 21-22
let cl = a1.inner_product(b2.0.iter());
let cr = a2.inner_product(b1.0.iter());
let L = {
let mut L_terms = Vec::with_capacity(1 + (2 * g_bold1.len()));
for (a, g) in a1.0.iter().zip(g_bold2.0.iter()) {
L_terms.push((*a, *g));
}
for (b, h) in b2.0.iter().zip(h_bold1.0.iter()) {
L_terms.push((*b, *h));
}
L_terms.push((cl, u));
// Uses vartime since this isn't a ZK proof
multiexp_vartime(&L_terms)
};
let R = {
let mut R_terms = Vec::with_capacity(1 + (2 * g_bold1.len()));
for (a, g) in a2.0.iter().zip(g_bold1.0.iter()) {
R_terms.push((*a, *g));
}
for (b, h) in b1.0.iter().zip(h_bold2.0.iter()) {
R_terms.push((*b, *h));
}
R_terms.push((cr, u));
multiexp_vartime(&R_terms)
};
// Now that we've calculated L, R, transcript them to receive x (26-27)
transcript.push_point(L);
transcript.push_point(R);
let x: C::F = transcript.challenge();
let x_inv = x.invert().unwrap();
// The prover and verifier now calculate the following (28-31)
g_bold = PointVector(Vec::with_capacity(g_bold1.len()));
for (a, b) in g_bold1.0.into_iter().zip(g_bold2.0.into_iter()) {
g_bold.0.push(multiexp_vartime(&[(x_inv, a), (x, b)]));
}
h_bold = PointVector(Vec::with_capacity(h_bold1.len()));
for (a, b) in h_bold1.0.into_iter().zip(h_bold2.0.into_iter()) {
h_bold.0.push(multiexp_vartime(&[(x, a), (x_inv, b)]));
}
P = (L * (x * x)) + P + (R * (x_inv * x_inv));
// 32-34
a = (a1 * x) + &(a2 * x_inv);
b = (b1 * x_inv) + &(b2 * x);
}
// `if n = 1` case from line 14-17
// Sanity
debug_assert_eq!(g_bold.len(), 1);
debug_assert_eq!(h_bold.len(), 1);
debug_assert_eq!(a.len(), 1);
debug_assert_eq!(b.len(), 1);
// We simply send a/b
transcript.push_scalar(a[0]);
transcript.push_scalar(b[0]);
Ok(())
}
/*
This has room for optimization worth investigating further. It currently takes
an iterative approach. It can be optimized further via divide and conquer.
Assume there are 4 challenges.
Iterative approach (current):
1. Do the optimal multiplications across challenge column 0 and 1.
2. Do the optimal multiplications across that result and column 2.
3. Do the optimal multiplications across that result and column 3.
Divide and conquer (worth investigating further):
1. Do the optimal multiplications across challenge column 0 and 1.
2. Do the optimal multiplications across challenge column 2 and 3.
3. Multiply both results together.
When there are 4 challenges (n=16), the iterative approach does 28 multiplications
versus divide and conquer's 24.
*/
fn challenge_products(challenges: &[(C::F, C::F)]) -> Vec<C::F> {
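// products[i] is the product, across every challenge, of its `x` (if the bit of i
// corresponding to that challenge is set) or its inverted `x` (if that bit is clear), with the
// first challenge corresponding to the most significant bit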
let mut products = vec![C::F::ONE; 1 << challenges.len()];
if !challenges.is_empty() {
products[0] = challenges[0].1;
products[1] = challenges[0].0;
for (j, challenge) in challenges.iter().enumerate().skip(1) {
let mut slots = (1 << (j + 1)) - 1;
while slots > 0 {
products[slots] = products[slots / 2] * challenge.0;
products[slots - 1] = products[slots / 2] * challenge.1;
slots = slots.saturating_sub(2);
}
}
// Sanity check since if the above failed to populate, it'd be critical
for product in &products {
debug_assert!(!bool::from(product.is_zero()));
}
}
products
}
/// Queue an Inner-Product proof for batch verification.
///
/// This will return Err if there is an error. This will return Ok if the proof was successfully
/// queued for batch verification. The caller is required to verify the batch in order to ensure
/// the proof is actually correct.
pub(crate) fn verify(
self,
verifier: &mut BatchVerifier<C>,
transcript: &mut VerifierTranscript,
) -> Result<(), IpError> {
let IpStatement { generators, h_bold_weights, u, P } = self;
// Calculate the base-2 logarithm of the amount of generators present
let mut lr_len = 0;
while (1 << lr_len) < generators.g_bold_slice().len() {
lr_len += 1;
}
let weight = match P {
P::Prover(_) => panic!("prove called with a P specification which was for the prover"),
P::Verifier { verifier_weight } => verifier_weight,
};
// Again, we start with the `else: (n > 1)` case
// We need x, x_inv per lines 25-27 for lines 28-31
let mut L = Vec::with_capacity(lr_len);
let mut R = Vec::with_capacity(lr_len);
let mut xs: Vec<C::F> = Vec::with_capacity(lr_len);
for _ in 0 .. lr_len {
L.push(transcript.read_point::<C>().map_err(|_| IpError::IncompleteProof)?);
R.push(transcript.read_point::<C>().map_err(|_| IpError::IncompleteProof)?);
xs.push(transcript.challenge());
}
// We calculate their inverse in batch
let mut x_invs = xs.clone();
{
let mut scratch = vec![C::F::ZERO; x_invs.len()];
ciphersuite::group::ff::BatchInverter::invert_with_external_scratch(
&mut x_invs,
&mut scratch,
);
}
// Now, with x and x_inv, we need to calculate g_bold', h_bold', P'
//
// For the sake of performance, we solely want to calculate all of these in terms of scalings
// for g_bold, h_bold, P, and don't want to actually perform intermediary scalings of the
// points
//
// L and R are easy, as it's simply x**2, x**-2
//
// For the series of g_bold, h_bold, we use the `challenge_products` function
// For how that works, please see its own documentation
let product_cache = {
let mut challenges = Vec::with_capacity(lr_len);
let x_iter = xs.into_iter().zip(x_invs);
let lr_iter = L.into_iter().zip(R);
for ((x, x_inv), (L, R)) in x_iter.zip(lr_iter) {
challenges.push((x, x_inv));
verifier.additional.push((weight * x.square(), L));
verifier.additional.push((weight * x_inv.square(), R));
}
Self::challenge_products(&challenges)
};
// And now for the `if n = 1` case
let a = transcript.read_scalar::<C>().map_err(|_| IpError::IncompleteProof)?;
let b = transcript.read_scalar::<C>().map_err(|_| IpError::IncompleteProof)?;
let c = a * b;
// The multiexp of these terms equates to the final permutation of P
// We now add terms for a * g_bold' + b * h_bold' + c * u, with the scalars negated such
// that the terms sum to 0 for an honest prover
// The g_bold * a term case from line 16
#[allow(clippy::needless_range_loop)]
for i in 0 .. generators.g_bold_slice().len() {
verifier.g_bold[i] -= weight * product_cache[i] * a;
}
// The h_bold * b term case from line 16
for i in 0 .. generators.h_bold_slice().len() {
verifier.h_bold[i] -=
weight * product_cache[product_cache.len() - 1 - i] * b * h_bold_weights[i];
}
// The c * u term case from line 16
verifier.g -= weight * c * u;
Ok(())
}
}

View File

@@ -0,0 +1,328 @@
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
#![doc = include_str!("../README.md")]
#![deny(missing_docs)]
#![allow(non_snake_case)]
use core::fmt;
use std::collections::HashSet;
use zeroize::Zeroize;
use multiexp::{multiexp, multiexp_vartime};
use ciphersuite::{
group::{ff::Field, Group, GroupEncoding},
Ciphersuite,
};
mod scalar_vector;
pub use scalar_vector::ScalarVector;
mod point_vector;
pub use point_vector::PointVector;
/// The transcript formats.
pub mod transcript;
pub(crate) mod inner_product;
pub(crate) mod lincomb;
/// The arithmetic circuit proof.
pub mod arithmetic_circuit_proof;
/// Functionality useful when testing.
#[cfg(any(test, feature = "tests"))]
pub mod tests;
/// Calculate the nearest power of two greater than or equal to the argument.
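///
/// `padded_pow_of_2(5) == 8`, while `padded_pow_of_2(8) == 8`.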
pub(crate) fn padded_pow_of_2(i: usize) -> usize {
let mut next_pow_of_2 = 1;
while next_pow_of_2 < i {
next_pow_of_2 <<= 1;
}
next_pow_of_2
}
/// An error from working with generators.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
pub enum GeneratorsError {
/// The provided list of generators for `g` (bold) was empty.
GBoldEmpty,
/// The provided list of generators for `h` (bold) did not match `g` (bold) in length.
DifferingGhBoldLengths,
/// The amount of provided generators were not a power of two.
NotPowerOfTwo,
/// A generator was used multiple times.
DuplicatedGenerator,
}
/// A full set of generators.
#[derive(Clone)]
pub struct Generators<C: Ciphersuite> {
g: C::G,
h: C::G,
g_bold: Vec<C::G>,
h_bold: Vec<C::G>,
h_sum: Vec<C::G>,
}
/// A batch verifier of proofs.
#[must_use]
#[derive(Clone)]
pub struct BatchVerifier<C: Ciphersuite> {
g: C::F,
h: C::F,
g_bold: Vec<C::F>,
h_bold: Vec<C::F>,
h_sum: Vec<C::F>,
additional: Vec<(C::F, C::G)>,
}
impl<C: Ciphersuite> fmt::Debug for Generators<C> {
fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result {
let g = self.g.to_bytes();
let g: &[u8] = g.as_ref();
let h = self.h.to_bytes();
let h: &[u8] = h.as_ref();
fmt.debug_struct("Generators").field("g", &g).field("h", &h).finish_non_exhaustive()
}
}
/// The generators for a specific proof.
///
/// These may have been reduced in size from the original set of generators, as beneficial
/// to performance.
#[derive(Copy, Clone)]
pub struct ProofGenerators<'a, C: Ciphersuite> {
g: &'a C::G,
h: &'a C::G,
g_bold: &'a [C::G],
h_bold: &'a [C::G],
}
impl<C: Ciphersuite> fmt::Debug for ProofGenerators<'_, C> {
fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result {
let g = self.g.to_bytes();
let g: &[u8] = g.as_ref();
let h = self.h.to_bytes();
let h: &[u8] = h.as_ref();
fmt.debug_struct("ProofGenerators").field("g", &g).field("h", &h).finish_non_exhaustive()
}
}
impl<C: Ciphersuite> Generators<C> {
/// Construct an instance of Generators for usage with Bulletproofs.
pub fn new(
g: C::G,
h: C::G,
g_bold: Vec<C::G>,
h_bold: Vec<C::G>,
) -> Result<Self, GeneratorsError> {
if g_bold.is_empty() {
Err(GeneratorsError::GBoldEmpty)?;
}
if g_bold.len() != h_bold.len() {
Err(GeneratorsError::DifferingGhBoldLengths)?;
}
if padded_pow_of_2(g_bold.len()) != g_bold.len() {
Err(GeneratorsError::NotPowerOfTwo)?;
}
let mut set = HashSet::new();
let mut add_generator = |generator: &C::G| {
assert!(!bool::from(generator.is_identity()));
let bytes = generator.to_bytes();
!set.insert(bytes.as_ref().to_vec())
};
assert!(!add_generator(&g), "g was prior present in empty set");
if add_generator(&h) {
Err(GeneratorsError::DuplicatedGenerator)?;
}
for g in &g_bold {
if add_generator(g) {
Err(GeneratorsError::DuplicatedGenerator)?;
}
}
for h in &h_bold {
if add_generator(h) {
Err(GeneratorsError::DuplicatedGenerator)?;
}
}
let mut running_h_sum = C::G::identity();
let mut h_sum = vec![];
let mut next_pow_of_2 = 1;
for (i, h) in h_bold.iter().enumerate() {
running_h_sum += h;
if (i + 1) == next_pow_of_2 {
h_sum.push(running_h_sum);
next_pow_of_2 *= 2;
}
}
Ok(Generators { g, h, g_bold, h_bold, h_sum })
}
/// Create a BatchVerifier for proofs which use these generators.
pub fn batch_verifier(&self) -> BatchVerifier<C> {
BatchVerifier {
g: C::F::ZERO,
h: C::F::ZERO,
g_bold: vec![C::F::ZERO; self.g_bold.len()],
h_bold: vec![C::F::ZERO; self.h_bold.len()],
h_sum: vec![C::F::ZERO; self.h_sum.len()],
additional: Vec::with_capacity(128),
}
}
/// Verify all proofs queued for batch verification in this BatchVerifier.
#[must_use]
pub fn verify(&self, verifier: BatchVerifier<C>) -> bool {
multiexp_vartime(
&[(verifier.g, self.g), (verifier.h, self.h)]
.into_iter()
.chain(verifier.g_bold.into_iter().zip(self.g_bold.iter().cloned()))
.chain(verifier.h_bold.into_iter().zip(self.h_bold.iter().cloned()))
.chain(verifier.h_sum.into_iter().zip(self.h_sum.iter().cloned()))
.chain(verifier.additional)
.collect::<Vec<_>>(),
)
.is_identity()
.into()
}
/// The `g` generator.
pub fn g(&self) -> C::G {
self.g
}
/// The `h` generator.
pub fn h(&self) -> C::G {
self.h
}
/// A slice to view the `g` (bold) generators.
pub fn g_bold_slice(&self) -> &[C::G] {
&self.g_bold
}
/// A slice to view the `h` (bold) generators.
pub fn h_bold_slice(&self) -> &[C::G] {
&self.h_bold
}
/// Reduce a set of generators to the quantity necessary to support a certain amount of
/// in-circuit multiplications/terms in a Pedersen vector commitment.
///
/// Returns None if asked to reduce to 0 or if the full set is insufficient to provide this
/// many generators.
pub fn reduce(&self, generators: usize) -> Option<ProofGenerators<'_, C>> {
if generators == 0 {
None?;
}
// Round to the nearest power of 2
let generators = padded_pow_of_2(generators);
if generators > self.g_bold.len() {
None?;
}
Some(ProofGenerators {
g: &self.g,
h: &self.h,
g_bold: &self.g_bold[.. generators],
h_bold: &self.h_bold[.. generators],
})
}
}
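
A sketch of constructing generators and running batch verification
(hypothetical; `Ristretto` is the ciphersuite enabled in this crate's
dev-dependencies, and the generators are sampled randomly solely for brevity,
where a hash-to-curve would be used in practice):
```rust
use rand_core::OsRng;
use ciphersuite::{group::Group, Ciphersuite, Ristretto};
use generalized_bulletproofs::Generators;

fn main() {
  let rand_point = || <Ristretto as Ciphersuite>::G::random(&mut OsRng);
  let g_bold = (0 .. 8).map(|_| rand_point()).collect::<Vec<_>>();
  let h_bold = (0 .. 8).map(|_| rand_point()).collect::<Vec<_>>();
  let generators =
    Generators::<Ristretto>::new(rand_point(), rand_point(), g_bold, h_bold).unwrap();
  // Reduce to the power of two sufficient for four in-circuit multiplications
  let _proof_generators = generators.reduce(4).unwrap();
  let verifier = generators.batch_verifier();
  // ... queue proofs for batch verification here ...
  assert!(generators.verify(verifier));
}
```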
impl<'a, C: Ciphersuite> ProofGenerators<'a, C> {
pub(crate) fn len(&self) -> usize {
self.g_bold.len()
}
pub(crate) fn g(&self) -> C::G {
*self.g
}
pub(crate) fn h(&self) -> C::G {
*self.h
}
pub(crate) fn g_bold(&self, i: usize) -> C::G {
self.g_bold[i]
}
pub(crate) fn h_bold(&self, i: usize) -> C::G {
self.h_bold[i]
}
pub(crate) fn g_bold_slice(&self) -> &[C::G] {
self.g_bold
}
pub(crate) fn h_bold_slice(&self) -> &[C::G] {
self.h_bold
}
}
/// The opening of a Pedersen commitment.
#[derive(Clone, Copy, PartialEq, Eq, Debug, Zeroize)]
pub struct PedersenCommitment<C: Ciphersuite> {
/// The value committed to.
pub value: C::F,
/// The mask blinding the value committed to.
pub mask: C::F,
}
impl<C: Ciphersuite> PedersenCommitment<C> {
/// Commit to this value, yielding the Pedersen commitment.
pub fn commit(&self, g: C::G, h: C::G) -> C::G {
multiexp(&[(self.value, g), (self.mask, h)])
}
}
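// A sketch of the relation, with hypothetical `value` and `mask` scalars:
//
//   let commitment = PedersenCommitment::<C> { value, mask }
//     .commit(generators.g(), generators.h());
//
// evaluates `(value * g) + (mask * h)`: `g` binds the value, while a uniformly
// sampled `mask` over `h` hides it.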
/// The opening of a Pedersen vector commitment.
#[derive(Clone, PartialEq, Eq, Debug, Zeroize)]
pub struct PedersenVectorCommitment<C: Ciphersuite> {
/// The values committed to across the `g` (bold) generators.
pub g_values: ScalarVector<C::F>,
/// The values committed to across the `h` (bold) generators.
pub h_values: ScalarVector<C::F>,
/// The mask blinding the values committed to.
pub mask: C::F,
}
impl<C: Ciphersuite> PedersenVectorCommitment<C> {
/// Commit to the vectors of values.
///
/// This function returns None if the number of generators is less than the number of values
/// within the relevant vector.
pub fn commit(&self, g_bold: &[C::G], h_bold: &[C::G], h: C::G) -> Option<C::G> {
if (g_bold.len() < self.g_values.len()) || (h_bold.len() < self.h_values.len()) {
None?;
}
let mut terms = vec![(self.mask, h)];
for pair in self.g_values.0.iter().cloned().zip(g_bold.iter().cloned()) {
terms.push(pair);
}
for pair in self.h_values.0.iter().cloned().zip(h_bold.iter().cloned()) {
terms.push(pair);
}
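// A single multiexp evaluates
// `(mask * h) + sum(g_values[i] * g_bold[i]) + sum(h_values[i] * h_bold[i])`,
// with the scalar terms zeroized afterwards.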
let res = multiexp(&terms);
terms.zeroize();
Some(res)
}
}


@@ -0,0 +1,265 @@
use core::ops::{Add, Sub, Mul};
use zeroize::Zeroize;
use ciphersuite::group::ff::PrimeField;
use crate::ScalarVector;
/// A reference to a variable usable within linear combinations.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
#[allow(non_camel_case_types)]
pub enum Variable {
/// A variable within the left vector of vectors multiplied against each other.
aL(usize),
/// A variable within the right vector of vectors multiplied against each other.
aR(usize),
/// A variable within the output vector of the left vector multiplied by the right vector.
aO(usize),
/// A variable within a Pedersen vector commitment, committed to with a generator from `g` (bold).
CG {
/// The commitment being indexed.
commitment: usize,
/// The index of the variable.
index: usize,
},
/// A variable within a Pedersen vector commitment, committed to with a generator from `h` (bold).
CH {
/// The commitment being indexed.
commitment: usize,
/// The index of the variable.
index: usize,
},
/// A variable within a Pedersen commitment.
V(usize),
}
// A no-op, as a Variable solely contains indices, not any sensitive data
impl Zeroize for Variable {
fn zeroize(&mut self) {}
}
/// A linear combination.
///
/// Specifically, `WL aL + WR aR + WO aO + WCG C_G + WCH C_H + WV V + c`.
#[derive(Clone, PartialEq, Eq, Debug, Zeroize)]
#[must_use]
pub struct LinComb<F: PrimeField> {
pub(crate) highest_a_index: Option<usize>,
pub(crate) highest_c_index: Option<usize>,
pub(crate) highest_v_index: Option<usize>,
// Sparse representation of WL/WR/WO
pub(crate) WL: Vec<(usize, F)>,
pub(crate) WR: Vec<(usize, F)>,
pub(crate) WO: Vec<(usize, F)>,
// Sparse representation once within a commitment
pub(crate) WCG: Vec<Vec<(usize, F)>>,
pub(crate) WCH: Vec<Vec<(usize, F)>>,
// Sparse representation of WV
pub(crate) WV: Vec<(usize, F)>,
pub(crate) c: F,
}
impl<F: PrimeField> From<Variable> for LinComb<F> {
fn from(constrainable: Variable) -> LinComb<F> {
LinComb::empty().term(F::ONE, constrainable)
}
}
impl<F: PrimeField> Add<&LinComb<F>> for LinComb<F> {
type Output = Self;
fn add(mut self, constraint: &Self) -> Self {
self.highest_a_index = self.highest_a_index.max(constraint.highest_a_index);
self.highest_c_index = self.highest_c_index.max(constraint.highest_c_index);
self.highest_v_index = self.highest_v_index.max(constraint.highest_v_index);
self.WL.extend(&constraint.WL);
self.WR.extend(&constraint.WR);
self.WO.extend(&constraint.WO);
while self.WCG.len() < constraint.WCG.len() {
self.WCG.push(vec![]);
}
while self.WCH.len() < constraint.WCH.len() {
self.WCH.push(vec![]);
}
for (sWC, cWC) in self.WCG.iter_mut().zip(&constraint.WCG) {
sWC.extend(cWC);
}
for (sWC, cWC) in self.WCH.iter_mut().zip(&constraint.WCH) {
sWC.extend(cWC);
}
self.WV.extend(&constraint.WV);
self.c += constraint.c;
self
}
}
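// Note: addition (and subtraction below) simply concatenates the sparse terms.
// Duplicate indices are fine, as weights are summed upon accumulation into
// dense vectors (see `accumulate_vector`).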
impl<F: PrimeField> Sub<&LinComb<F>> for LinComb<F> {
type Output = Self;
fn sub(mut self, constraint: &Self) -> Self {
self.highest_a_index = self.highest_a_index.max(constraint.highest_a_index);
self.highest_c_index = self.highest_c_index.max(constraint.highest_c_index);
self.highest_v_index = self.highest_v_index.max(constraint.highest_v_index);
self.WL.extend(constraint.WL.iter().map(|(i, weight)| (*i, -*weight)));
self.WR.extend(constraint.WR.iter().map(|(i, weight)| (*i, -*weight)));
self.WO.extend(constraint.WO.iter().map(|(i, weight)| (*i, -*weight)));
while self.WCG.len() < constraint.WCG.len() {
self.WCG.push(vec![]);
}
while self.WCH.len() < constraint.WCH.len() {
self.WCH.push(vec![]);
}
for (sWC, cWC) in self.WCG.iter_mut().zip(&constraint.WCG) {
sWC.extend(cWC.iter().map(|(i, weight)| (*i, -*weight)));
}
for (sWC, cWC) in self.WCH.iter_mut().zip(&constraint.WCH) {
sWC.extend(cWC.iter().map(|(i, weight)| (*i, -*weight)));
}
self.WV.extend(constraint.WV.iter().map(|(i, weight)| (*i, -*weight)));
self.c -= constraint.c;
self
}
}
impl<F: PrimeField> Mul<F> for LinComb<F> {
type Output = Self;
fn mul(mut self, scalar: F) -> Self {
for (_, weight) in self.WL.iter_mut() {
*weight *= scalar;
}
for (_, weight) in self.WR.iter_mut() {
*weight *= scalar;
}
for (_, weight) in self.WO.iter_mut() {
*weight *= scalar;
}
for WC in self.WCG.iter_mut() {
for (_, weight) in WC {
*weight *= scalar;
}
}
for WC in self.WCH.iter_mut() {
for (_, weight) in WC {
*weight *= scalar;
}
}
for (_, weight) in self.WV.iter_mut() {
*weight *= scalar;
}
self.c *= scalar;
self
}
}
impl<F: PrimeField> LinComb<F> {
/// Create an empty linear combination.
pub fn empty() -> Self {
Self {
highest_a_index: None,
highest_c_index: None,
highest_v_index: None,
WL: vec![],
WR: vec![],
WO: vec![],
WCG: vec![],
WCH: vec![],
WV: vec![],
c: F::ZERO,
}
}
/// Add a new instance of a term to this linear combination.
pub fn term(mut self, scalar: F, constrainable: Variable) -> Self {
match constrainable {
Variable::aL(i) => {
self.highest_a_index = self.highest_a_index.max(Some(i));
self.WL.push((i, scalar))
}
Variable::aR(i) => {
self.highest_a_index = self.highest_a_index.max(Some(i));
self.WR.push((i, scalar))
}
Variable::aO(i) => {
self.highest_a_index = self.highest_a_index.max(Some(i));
self.WO.push((i, scalar))
}
Variable::CG { commitment: i, index: j } => {
self.highest_c_index = self.highest_c_index.max(Some(i));
self.highest_a_index = self.highest_a_index.max(Some(j));
while self.WCG.len() <= i {
self.WCG.push(vec![]);
}
self.WCG[i].push((j, scalar))
}
Variable::CH { commitment: i, index: j } => {
self.highest_c_index = self.highest_c_index.max(Some(i));
self.highest_a_index = self.highest_a_index.max(Some(j));
while self.WCH.len() <= i {
self.WCH.push(vec![]);
}
self.WCH[i].push((j, scalar))
}
Variable::V(i) => {
self.highest_v_index = self.highest_v_index.max(Some(i));
self.WV.push((i, scalar));
}
};
self
}
/// Add to the constant c.
pub fn constant(mut self, scalar: F) -> Self {
self.c += scalar;
self
}
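// Example: to express `aL[0] + aR[0] - aO[0] = 0` (assuming the surrounding
// proof system constrains each linear combination to evaluate to zero):
//
//   LinComb::empty()
//     .term(F::ONE, Variable::aL(0))
//     .term(F::ONE, Variable::aR(0))
//     .term(-F::ONE, Variable::aO(0))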
/// View the current weights for aL.
pub fn WL(&self) -> &[(usize, F)] {
&self.WL
}
/// View the current weights for aR.
pub fn WR(&self) -> &[(usize, F)] {
&self.WR
}
/// View the current weights for aO.
pub fn WO(&self) -> &[(usize, F)] {
&self.WO
}
/// View the current weights for CG.
pub fn WCG(&self) -> &[Vec<(usize, F)>] {
&self.WCG
}
/// View the current weights for CH.
pub fn WCH(&self) -> &[Vec<(usize, F)>] {
&self.WCH
}
/// View the current weights for V.
pub fn WV(&self) -> &[(usize, F)] {
&self.WV
}
/// View the current constant.
pub fn c(&self) -> F {
self.c
}
}
pub(crate) fn accumulate_vector<F: PrimeField>(
accumulator: &mut ScalarVector<F>,
values: &[(usize, F)],
weight: F,
) {
for (i, coeff) in values {
accumulator[*i] += *coeff * weight;
}
}
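// e.g. with `values = [(0, a), (3, b)]`, this adds `a * weight` to
// `accumulator[0]` and `b * weight` to `accumulator[3]`, leaving every other
// entry untouched.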
