130 Commits

Author SHA1 Message Date
Luke Parker
c24768f922 Fix borks from the latest nightly
The `cargo doc` build started to fail with the rolling of `doc_auto_cfg` into
`doc_cfg`, so now we don't build docs for deps (as we can't reasonably update
`generic-array` at this time).

`home` has been patched as we are able to, not as a direct requirement of this
PR.
2025-11-04 13:10:11 -05:00
Luke Parker
5818f1a41c Update nightly version 2025-11-04 10:05:08 -05:00
Luke Parker
1b781b4b57 Fix CI 2025-10-07 04:39:32 -04:00
Luke Parker
63f7e220c0 Update macOS labels in CI due to deprecation of macos-13 2025-10-05 10:59:40 -04:00
Luke Parker
7d49366373 Move develop to patch-polkadot-sdk (#678)
* Update `build-dependencies` CI action

* Update `develop` to `patch-polkadot-sdk`

Allows us to finally remove the old `serai-dex/substrate` repository _and_
should have CI pass without issue on `develop` again.

The changes made here should be trivial and maintain all prior
behavior/functionality. The most notable are to `chain_spec.rs`, in order to
still use a SCALE-encoded `GenesisConfig` (avoiding `serde_json`).

* CI fixes

* Add `/usr/local/opt/llvm/lib` to paths on macOS hosts

* Attempt to use `LD_LIBRARY_PATH` in macOS GitHub CI

* Use `libp2p 0.56` in `serai-node`

* Correct Windows build dependencies

* Correct `llvm/lib` path on macOS

* Correctly handle how macOS 13 and 14 have different Homebrew paths

* Use `sw_vers` instead of `uname` on macOS

Yields the macOS version instead of the kernel's version.

* Replace hard-coded path with the intended env variable to fix macOS 13

* Add `libclang-dev` as dependency to the Debian Dockerfile

* Set the `CODE` storage slot

* Update to a version of substrate without `wasmtimer`

Turns out `wasmtimer` is WASM only. This should restore the node's functioning
on non-WASM environments.

* Restore `clang` as a dependency in the Debian Dockerfile, as we require a C++ compiler

* Move from Debian bookworm to trixie

* Restore `chain_getBlockBin` to the RPC

* Always generate a new key for the P2P network

* Mention every account on-chain before they publish a transaction

`CheckNonce` required accounts have a provider in order to even have their
nonce considered. This shims that by claiming every account has a provider at
the start of a block, if it signs a transaction.

The actual execution could presumably diverge between block building (which
sets the provider before each transaction) and execution (which sets the
providers at the start of the block). It doesn't diverge in our current
configuration and it won't be propagated to `next` (which doesn't use
`CheckNonce`).

Also uses explicit indexes for the `serai_abi::{Call, Event}` `enum`s.

* Adopt `patch-polkadot-sdk` with fixed peering

* Manually insert the authority discovery key into the keystore

I did try pulling in `pallet-authority-discovery` for this, updating
`SessionKeys`, but that was insufficient for whatever reason.

* Update to latest `substrate-wasm-builder`

* Fix timeline for incrementing providers

e1671dd71b incremented the providers for every
single transaction's sender before execution, noting the solution was fragile
but it worked for us at this time. It did not work for us at this time.

The new solution replaces `inc_providers` with direct access to the `Account`
`StorageMap` to increment the providers, achieving the desired goal, _without_
emitting an event (which is ordered, and the disparate order between building
and execution was causing mismatches of the state root); a sketch of this
approach follows this entry.

This solution is also fragile and may also be insufficient. None of this code
exists anymore on `next` however. It just has to work sufficiently for now.

* clippy
2025-10-05 10:58:08 -04:00
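A minimal sketch of the approach described under "Fix timeline for incrementing providers", assuming `frame_system`'s `Account` storage map and its `AccountInfo { providers, .. }` layout (illustrative, not the exact code from the commit):

```rust
// Illustrative sketch only, assuming frame_system's `Account` storage map and
// `AccountInfo { providers, .. }` layout; not the exact code from the commit.
fn shim_provider<T: frame_system::Config>(who: &T::AccountId) {
  frame_system::Account::<T>::mutate(who, |info| {
    // Claim a provider without `inc_providers`, so no event is emitted and
    // block building/execution cannot diverge on event ordering.
    if info.providers == 0 {
      info.providers = 1;
    }
  });
}
```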
Luke Parker
55ed33d2d1 Update to a version of Substrate which no longer cites our fork of substrate-bip39 2025-09-30 19:30:40 -04:00
Luke Parker
0066b94d38 Update substrate-wasm-builder from serai/polkadot-sdk to serai/patch-polkadot-sdk
Steps towards allowing us to delete the `serai/polkadot-sdk` repository.
2025-09-01 21:07:11 -04:00
Luke Parker
7d54c02ec6 Update to latest nightly
Replaces #671 due to a lint being triggered.
2025-09-01 16:48:34 -04:00
Mohan
568324f631 fix(spec): svm version mismatch in docs; document foundryup (#665)
* fix(spec): Change svm version in docs to 0.8.26

* fix(spec): add instructions for using foundryup
2025-09-01 15:58:59 -04:00
Luke Parker
eaa9a0e5a6 Pin actions in the pages workflow 2025-09-01 15:55:17 -04:00
Luke Parker
251996c1b0 Use solc 0.8.26
`next` already does, and it's annoying to have to consistently switch between
the two branches.
2025-09-01 15:55:06 -04:00
Luke Parker
98b9cc82a7 Fix Some(_) which should be Ok(_) 2025-09-01 15:42:47 -04:00
Luke Parker
f8adfb56ad Remove unwrap within debug assertion 2025-08-26 23:15:58 -04:00
Luke Parker
7a790f3a20 ff/alloc when ciphersuite/alloc 2025-08-23 11:00:05 -04:00
Luke Parker
a7c77f8b5f repr(transparent) on dalek_ff_group::FieldElement 2025-08-23 05:17:43 -04:00
Luke Parker
da3095ed15 Remove FieldElement::from_square
The new `FieldElement::from_u256` is sufficient to load an unreduced value. The
caller can perform the square themselves, without us explicitly supporting this
special case.

Updates the monero-oxide version used to one which no longer uses
`FieldElement::from_square` (as its use there is why this was added).
2025-08-22 18:42:43 -04:00
Luke Parker
758d422595 Have <ed448::Point as Zeroize>::zeroize yield a well-defined value 2025-08-20 08:14:00 -04:00
Luke Parker
9841061b49 Add missing feature in substrate/client 2025-08-20 06:38:25 -04:00
Luke Parker
4122a0135f Fix dirty Cargo.lock 2025-08-20 05:20:47 -04:00
Luke Parker
b63ef32864 Smash Ciphersuite definitions into their own crates
Uses dalek-ff-group for Ed25519 and Ristretto. Uses minimal-ed448 for Ed448.
Adds ciphersuite-kp256 for Secp256k1 and P-256.
2025-08-20 05:12:36 -04:00
Luke Parker
8be03a8fc2 Fix dirty lockfile 2025-08-20 01:15:56 -04:00
Luke Parker
677a2e5749 Fix zeroization timeline in multiexp, cargo machete 2025-08-20 00:35:56 -04:00
Luke Parker
38bda1d586 dalek_ff_group::FieldElement: FromUniformBytes<64> 2025-08-20 00:23:39 -04:00
Luke Parker
2bc2ca6906 Implement FromUniformBytes<64> for dalek_ff_group::Scalar 2025-08-20 00:06:07 -04:00
Luke Parker
900a6612d7 Use std-shims to reduce flexible-transcript MSRV to 1.66
flexible-transcript already had a shim to support <1.66. This was irrelevant
since flexible-transcript had an MSRV of 1.73. Due to how clunky it was, it has
been removed despite theoretically enabling an even lower MSRV.
2025-08-19 23:43:26 -04:00
Luke Parker
17c1d5cd6b Tweak multiexp to Zeroize points when invoked in constant time, not just scalars 2025-08-19 22:28:59 -04:00
Luke Parker
8a1b56a928 Make the transcript dependency optional for schnorr-signatures
It's only required when aggregating.
2025-08-19 21:50:58 -04:00
Luke Parker
75964cf6da Place Schnorr signature aggregation behind a feature flag 2025-08-19 21:45:59 -04:00
Luke Parker
d407e35cee Fix Ciphersuite feature flagging 2025-08-19 21:42:25 -04:00
Luke Parker
c8ef044acb Version bump std-shims 2025-08-19 21:01:14 -04:00
Luke Parker
ddbc32de4d Update ciphersuite/dkg MSRVs 2025-08-19 18:20:19 -04:00
Luke Parker
e5ccfac19e Replace bespoke LazyLock/OnceLock with spin re-exports
Presumably notably slower on platforms with std, yet only when compiled with old
versions of Rust, where the only options are this or no support at all anyways.
2025-08-19 18:10:33 -04:00
Luke Parker
432daae1d1 Polyfill extension traits for div_ceil and io::Error::other 2025-08-19 18:04:29 -04:00
Luke Parker
da3a85efe5 Only drop OnceLock value if initialized 2025-08-19 17:50:04 -04:00
Luke Parker
1e0240123d Shim LazyLock when before 1.70 2025-08-19 17:40:19 -04:00
Luke Parker
f6d4d1b084 Remove unused import, fix dirty Cargo.lock 2025-08-19 16:24:19 -04:00
Luke Parker
1b37dd2951 Shim std::sync::LazyLock for Rust < 1.80
Allows downgrading some crypto crates' MSRV to 1.79 as well.
2025-08-19 16:15:44 -04:00
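For reference, a condensed sketch of the gated re-export these shim commits converge on (matching the `std-shims` diff further below; `rustversion` assumed as a dependency):

```rust
// On Rust >= 1.80 with `std`, re-export the real LazyLock; otherwise fall
// back to `spin::Lazy` under the same name.
#[rustversion::since(1.80)]
#[cfg(feature = "std")]
pub use std::sync::LazyLock;

#[rustversion::before(1.80)]
#[cfg(feature = "std")]
pub use spin::Lazy as LazyLock;

#[cfg(not(feature = "std"))]
pub use spin::Lazy as LazyLock;
```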
Luke Parker
f32e0609f1 Add warning to dalek-ff-group 2025-08-19 15:25:40 -04:00
Luke Parker
ca85f9ba0c Remove the poorly-designed reduce_512 API
Unused and unpublished. This was only added in the FCMP++ branch as a quick fix
for performance reasons. Finding a better API is still a tricky question, but
this API is _bad_.
2025-08-19 15:24:49 -04:00
Luke Parker
cfd1cb3a37 Add FieldElement::wide_reduce to dalek-ff-group 2025-08-19 13:48:54 -04:00
Luke Parker
f2c13a0040 Expose Once within std-shims, bump spin to 0.10
This is technically a semver break due to bumping spin to 0.10, with the types
from spin being directly exposed. Long-term, we should not directly expose spin
but instead have our own types which are thin wrappers around spin (clearly
defining our API and allowing upgrading internals without breaking semver).
2025-08-19 13:36:01 -04:00
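A hypothetical illustration of the thin-wrapper direction mentioned above (not current code): wrap `spin::Once` so `spin` never appears in the public API and can be swapped without a semver break.

```rust
/// Hypothetical wrapper keeping `spin` an implementation detail.
pub struct Once(spin::Once<()>);

impl Once {
  pub const fn new() -> Self {
    Self(spin::Once::new())
  }

  /// Run `f` exactly once, even under concurrent calls.
  pub fn call_once<F: FnOnce()>(&self, f: F) {
    self.0.call_once(|| f());
  }

  pub fn is_completed(&self) -> bool {
    self.0.is_completed()
  }
}
```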
Luke Parker
961f46bc04 Add const fn to create a dalek-ff-group FieldElement 2025-08-19 13:17:39 -04:00
Luke Parker
2c4de3bab4 Bump version of ff-group-tests 2025-08-19 12:51:16 -04:00
Luke Parker
95c30720d2 Update how x coordinates are handled in bitcoin-serai 2025-08-18 14:52:29 -04:00
Luke Parker
ceede14f5c Fix misc compilation errors 2025-08-18 14:52:29 -04:00
Luke Parker
5e60ea9718 Don't offset nonces yet negate to achieve an even Y coordinate
Replaces an iterative loop with an immediate result, if action is necessary.
2025-08-18 14:52:29 -04:00
Luke Parker
153f6f2f2f Update to a monero-oxide patched to dkg 0.6 2025-08-18 14:52:29 -04:00
Luke Parker
104c0d4492 Rename ThresholdKeys::secret_share to ThresholdKeys::original_secret_share 2025-08-18 14:52:29 -04:00
Luke Parker
7c8f13ab28 Raise flexible-transcript requirement as required 2025-08-18 14:52:29 -04:00
Luke Parker
cb0deadf9a Version bump flexible-transcript 2025-08-18 14:52:29 -04:00
Luke Parker
cb489f9cef Other version bumps 2025-08-18 14:52:29 -04:00
Luke Parker
cc662cb591 Version bumps, add necessary version specifications 2025-08-18 14:52:29 -04:00
Luke Parker
a8b8844e3f Fix MSRV for simple-request 2025-08-18 14:52:29 -04:00
Luke Parker
82b543ef75 Fix clippy lint for ed448 on optional compilation path 2025-08-18 14:52:29 -04:00
Luke Parker
72e80c1a3d Update everything which uses dkg to the new APIs 2025-08-18 14:52:29 -04:00
Luke Parker
b6edc94bcd Add dealer key generation crate 2025-08-18 14:52:29 -04:00
Luke Parker
cfce2b26e2 Update READMEs, targeting an 80-character line limit 2025-08-18 14:52:29 -04:00
Luke Parker
e87bbcda64 Have modular-frost compile again 2025-08-18 14:52:29 -04:00
Luke Parker
9f84adf8b3 Smash dkg into dkg, dkg-[recovery, promote, musig, pedpop]
promote and pedpop require dleq, which doesn't support no-std. All three should
be moved outside the Serai repository, per #597, as none are planned for use
nor worth covering under our BBP.
2025-08-18 14:52:29 -04:00
Luke Parker
3919cf55ae Extend modular-frost to test with scaled and offset keys
The transcript transcripted the group key _plus_ the offset, when it should've
only transcripted the group key as the declared group key already had the
offset applied. This has been fixed.
2025-08-18 14:52:29 -04:00
Luke Parker
38dd8cb191 Support taking arbitrary linear combinations of signing keys, not just additive offsets 2025-08-18 14:52:29 -04:00
Luke Parker
f2563d39cb Correct crypto MSRVs 2025-08-18 14:52:29 -04:00
Luke Parker
15a9cbef40 git checkout -f next ./crypto
Proceeds to remove the eVRF DKG after, only keeping what's relevant to this
branch alone.
2025-08-18 14:52:29 -04:00
Luke Parker
078d6e51e5 Re-install python3 after removal to solve unmet dependencies 2025-08-15 16:17:31 -04:00
Luke Parker
6c33e18745 Explicitly install python3 to fix build-dependencies 2025-08-15 16:14:10 -04:00
Luke Parker
b743c9a43e Update Rust version
This causes the Serai node to compile and run again.
2025-08-15 15:26:16 -04:00
Luke Parker
0c2f2979a9 Remove monero-serai, migrating to monero-oxide 2025-08-15 11:45:20 -04:00
Luke Parker
971951a1a6 Add overflow-checks even on release, per good practice 2025-08-15 10:56:28 -04:00
Luke Parker
92d9e908cb Version bumps for packages that needed to be published for monero-oxide 2025-08-15 10:56:10 -04:00
Luke Parker
a32b97be88 Move to wasm32v1-none from wasm32-unknown-unknown
Works towards fixing how the Substrate node Docker image no longer works.
2025-08-15 10:55:05 -04:00
Luke Parker
e3809b2ff1 Remove unnecessary edits to Docker config in an attempt to fix the CI 2025-08-12 01:27:28 -04:00
Luke Parker
fd2d8b4f0a Use Rust 1.89 when installing bins via cargo, version pin svm-rs
svm-rs just released a new version requiring 1.89 to compile. This proceeds to
not install _any_ software with 1.85, to minimize how many toolchains we have in
use.
2025-08-12 01:27:28 -04:00
Luke Parker
bc81614894 Attempt Docker 24 again 2025-08-12 01:27:28 -04:00
Luke Parker
8df5aa2e2d Forward docker stderr to stdout in case stderr is being dropped for some reason 2025-08-12 01:27:28 -04:00
Luke Parker
b000740470 Docker 25 since 24 doesn't have an active tag anymore 2025-08-12 01:27:28 -04:00
Luke Parker
b9f554111d Attempt to use Docker 24
A long-shot premised on an old forum post describing how downgrading to Docker
24 solved their instance of the error we face, though our conditions are
presumably different.
2025-08-12 01:27:28 -04:00
Luke Parker
354c408e3e Stop using an older version of Docker 2025-08-12 01:27:28 -04:00
Luke Parker
df3b60376a Restore Debian 12 Bookworm over Debian 11 Bullseye 2025-08-12 01:27:28 -04:00
Luke Parker
8d209c652e Add missing "-4" arguments to wget 2025-08-12 01:27:28 -04:00
Luke Parker
9ddad794b4 Use wget -4 for the same reason as the prior commit 2025-08-12 01:27:28 -04:00
Luke Parker
b934e484cc Replace busybox wget with wget on alpine to attempt to resolve DNS issues
See https://github.com/alpinelinux/docker-alpine/issues/155.
2025-08-12 01:27:28 -04:00
Luke Parker
f8aee9b3c8 Add overflow-checks = true recommendation to monero-serai 2025-08-12 01:27:28 -04:00
Luke Parker
f51d77d26a Fix tweaked Substrate connection code in serai-client tests 2025-08-12 01:27:28 -04:00
Luke Parker
0780deb643 Use three separate commands within the Bitcoin Dockerfile to download the release
Attempts to debug which is failing, as right now, the command as a whole is failing within the CI.
2025-08-12 01:27:28 -04:00
Luke Parker
75c38560f4 Bookworm -> Bullseye, except for the runtime 2025-08-12 01:27:28 -04:00
Luke Parker
9f1c5268a5 Attempt downgrading Docker from 27 to 26 2025-08-12 01:27:28 -04:00
Luke Parker
35b113768b Attempt downgrading docker from .28 to .27 2025-08-12 01:27:28 -04:00
Luke Parker
f2595c4939 Tweak how substrate-client tests wait to connect to the Monero node 2025-08-12 01:27:28 -04:00
Luke Parker
8fcfa6d3d5 Add dedicated error for when amounts aren't representable within a u64
Fixes the issue where _inputs_ could still overflow u64::MAX and cause a panic.
2025-08-12 01:27:28 -04:00
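An illustrative sketch of the checked summation this implies; the error name here is an assumption, not the crate's actual type:

```rust
/// Hypothetical error for amounts whose total exceeds what a u64 can represent.
#[derive(Debug)]
pub enum AmountError {
  NotRepresentable,
}

/// Sum input amounts, erroring instead of panicking if the total overflows.
pub fn sum_amounts(amounts: &[u64]) -> Result<u64, AmountError> {
  amounts
    .iter()
    .try_fold(0u64, |sum, amount| sum.checked_add(*amount))
    .ok_or(AmountError::NotRepresentable)
}
```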
Luke Parker
54c9d19726 Have docker install set host 2025-08-12 01:27:28 -04:00
Luke Parker
25324c3cd5 Add uidmap dependency for rootless Docker 2025-08-12 01:27:28 -04:00
Luke Parker
ecb7df85b0 if: runner.os == 'Linux', with single quotes 2025-08-12 01:27:28 -04:00
Luke Parker
68c7acdbef Attempt using rootless Docker in CI via the setup-docker-action
Restores using ubuntu-latest.

Basically, at some point in the last year the existing Docker e2e tests started
failing. I'm unclear if this is an issue with the OS, the docker packages, or
what. This just tries to find a solution.
2025-08-12 01:27:28 -04:00
Luke Parker
8b60feed92 Normalize FROM AS casing in Dockerfiles 2025-08-12 01:27:28 -04:00
Luke Parker
5c895efcd0 Downgrade tests requiring Docker from Ubuntu latest to Ubuntu 22.04
Attempts to resolve containers immediately exiting for some specific test runs.
2025-08-12 01:27:28 -04:00
Luke Parker
60e55656aa deny --hide-inclusion-graph 2025-08-12 01:27:28 -04:00
Luke Parker
9536282418 Update which deb archive to use within the runtime Dockerfile 2025-08-12 01:27:28 -04:00
Luke Parker
8297d0679d Update substrate to one with a properly defined panic handler as of modern Rust 2025-08-12 01:27:28 -04:00
Luke Parker
d9f854b08a Attempt to fix install of clang within runtime Dockerfile 2025-08-12 01:27:28 -04:00
Luke Parker
8aaf7f7dc6 Remove (presumably) unnecessary command to explicitly install python 2025-08-12 01:27:28 -04:00
Luke Parker
ce447558ac Update Rust versions used in orchestration 2025-08-12 01:27:28 -04:00
Luke Parker
fc850da30e Missing --allow-remove-essential flag 2025-08-12 01:27:28 -04:00
Luke Parker
d6f6cf1965 Attempt to force remove shim-signed to resolve 'unmet dependencies' issues with shim-signed 2025-08-12 01:27:28 -04:00
Luke Parker
4438b51881 Expand python packages explicitly installed 2025-08-12 01:27:28 -04:00
Luke Parker
6ae0d9fad7 Install cargo deny with Rust 1.85 and pin its version 2025-08-12 01:27:28 -04:00
Luke Parker
ad08b410a8 Pin cargo-machete to 0.8.0 to prevent other unexpected CI failures 2025-08-12 01:27:28 -04:00
Luke Parker
ec3cfd3ab7 Explicitly install python3 after removing various unnecessary packages 2025-08-12 01:27:28 -04:00
Luke Parker
01eb2daa0b Updated dated version of actions/cache 2025-08-12 01:27:28 -04:00
Luke Parker
885000f970 Add update, upgrade, fix-missing call to Ubuntu build dependencies
Attempts to fix a CI failure caused by some misconfiguration...
2025-08-12 01:27:28 -04:00
Luke Parker
4be506414b Install cargo machete with Rust 1.85
cargo machete now uses Rust's 2024 edition, and 1.85 was the first to ship it.
2025-08-12 01:27:28 -04:00
Luke Parker
1143d84e1d Remove msbuild from packages to remove when the CI starts
Apparently, it's no longer installed by default.
2025-08-12 01:27:28 -04:00
Luke Parker
336922101f Further harden decoy selection
It risked panicking if a non-monotonic distribution was returned. While the
provided RPC code won't return non-monotonic distributions, users are allowed
to define their own implementations and override the provided method. Said
implementations could omit this required check.
2025-08-12 01:27:28 -04:00
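An illustration of the check in question, assuming the distribution is a cumulative count of outputs per block (so it must never decrease):

```rust
/// Return an error instead of allowing a later panic if an RPC implementation
/// hands back a non-monotonic output distribution.
fn check_monotonic(distribution: &[u64]) -> Result<(), &'static str> {
  if distribution.windows(2).any(|pair| pair[1] < pair[0]) {
    return Err("RPC returned a non-monotonic output distribution");
  }
  Ok(())
}
```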
Luke Parker
ffa033d978 Clarify transcripting for Clsag::verify, Mlsag::verify, as with Clsag::sign 2025-08-12 01:27:28 -04:00
Luke Parker
23f986f57a Tweak the Substrate runtime as required by the Rust version bump performed 2025-08-12 01:27:28 -04:00
Luke Parker
bb726b58af Fix #654 2025-08-12 01:27:28 -04:00
Luke Parker
387615705c Fix #643 2025-08-12 01:27:28 -04:00
Luke Parker
c7f825a192 Rename Bulletproof::calculate_bp_clawback to Bulletproof::calculate_clawback 2025-08-12 01:27:28 -04:00
Luke Parker
d363b1c173 Fix #630 2025-08-12 01:27:28 -04:00
Luke Parker
d5077ae966 Respond to 13.1.1.
Uses Zeroizing for username/password in monero-simple-request-rpc.
2025-08-12 01:27:28 -04:00
Luke Parker
188fcc3cb4 Remove potentially-failing unchecked arithmetic operations for ones which error
In response to 9.13.3.

Requires a bump to Rust 1.82 to take advantage of `Option::is_none_or`.
2025-08-12 01:27:28 -04:00
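For context, `Option::is_none_or` (stable as of Rust 1.82) lets a bound check read without unchecked arithmetic or unwraps; a generic example, not code from the repository:

```rust
/// `value` is acceptable if no bound is set, or if it's within the bound.
fn within_bound(value: u64, bound: Option<u64>) -> bool {
  bound.is_none_or(|bound| value <= bound)
}

fn main() {
  assert!(within_bound(5, None));
  assert!(within_bound(5, Some(10)));
  assert!(!within_bound(50, Some(10)));
}
```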
Luke Parker
cbab9486c6 Clarify messages in non-debug assertions 2025-08-12 01:27:28 -04:00
Luke Parker
a5f4c450c6 Response to usage of unwrap in non-test code
This commit replaces all usage of `unwrap` with `expect` within
`networks/monero`, clarifying why the panic risked is unreachable. This commit
also replaces some uses of `unwrap` with solutions which are guaranteed not to
fail.

Notably, compilation on 128-bit systems is prevented, ensuring
`u64::try_from(usize::MAX)` will never panic at runtime.

Slight breaking changes are additionally included as necessary to massage out
some avoidable panics.
2025-08-12 01:27:28 -04:00
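A minimal sketch of how compilation on wider-than-64-bit targets can be ruled out, so `u64::try_from(usize)` conversions cannot fail at runtime (the exact guard used may differ):

```rust
// Refuse to build on targets where `usize` is wider than 64 bits, so every
// `u64::try_from(some_usize)` in the crate is infallible in practice.
#[cfg(not(any(
  target_pointer_width = "16",
  target_pointer_width = "32",
  target_pointer_width = "64",
)))]
compile_error!("this crate assumes a pointer width of at most 64 bits");
```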
Luke Parker
4f65a0b147 Remove Clone from ClsagMultisigMask{Sender, Receiver}
This had ill-defined properties on Clone, as a mask could be sent multiple times
(unintended) and multiple algorithms may receive the same mask from a singular
sender.

Requires removing the Clone bound within modular-frost and expanding the test
helpers accordingly.

This was not raised in the audit yet was identified upon independent review.
2025-08-12 01:27:28 -04:00
Luke Parker
feb18d64a7 Respond to 2 3
We now use `FrostError::InternalError` instead of a panic to represent the mask
not being set.
2025-08-12 01:27:28 -04:00
Luke Parker
cb1e6535cb Respond to 2 2 2025-08-12 01:27:28 -04:00
Luke Parker
6b8cf6653a Respond to 1.1 A2 (also cited as 2 1)
`read_vec` was unbounded. It now accepts an optional bound. In some places, we
are able to define and provide a bound (Bulletproofs(+)' `L` and `R` vectors).
In others, we cannot (the amount of inputs within a transaction, which is not
subject to any rule in the current consensus other than the total transaction
size limit). Usage of `None` in those locations preserves the existing
behavior.
2025-08-12 01:27:28 -04:00
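An illustrative, self-contained version of a length-bounded reader; the name, signature, and length encoding are assumptions, not the crate's actual `read_vec`:

```rust
use std::io::{self, Read};

/// Read a length-prefixed vector, rejecting lengths above an optional bound.
/// `None` preserves unbounded behavior for fields with no consensus limit.
fn read_bounded_vec<R: Read, T>(
  reader: &mut R,
  bound: Option<u64>,
  mut read_element: impl FnMut(&mut R) -> io::Result<T>,
) -> io::Result<Vec<T>> {
  let mut len_bytes = [0; 8];
  reader.read_exact(&mut len_bytes)?;
  let len = u64::from_le_bytes(len_bytes);
  if bound.is_some_and(|bound| len > bound) {
    return Err(io::Error::new(io::ErrorKind::InvalidData, "length exceeded bound"));
  }
  let mut res = Vec::new();
  for _ in 0 .. len {
    res.push(read_element(reader)?);
  }
  Ok(res)
}
```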
Luke Parker
b426bfcfe8 Respond to 1.1 A1 2025-08-12 01:27:28 -04:00
Luke Parker
21ce50ecf7 Revert "Forward docker stderr to stdout in case stderr is being dropped for some reason"
This was intended for the monero-audit branch.
2025-08-10 20:53:09 -04:00
Luke Parker
a4ceb2e756 Forward docker stderr to stdout in case stderr is being dropped for some reason 2025-08-10 20:50:12 -04:00
Luke Parker
eab5d9e64f Remove Mastodon link from README
Closes #662.
2025-07-12 03:29:21 -04:00
397 changed files with 7532 additions and 22623 deletions

View File

@@ -12,7 +12,7 @@ runs:
steps:
- name: Bitcoin Daemon Cache
id: cache-bitcoind
uses: actions/cache@13aacd865c20de90d75de3b17ebe84f7a17d57d2
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809
with:
path: bitcoin.tar.gz
key: bitcoind-${{ runner.os }}-${{ runner.arch }}-${{ inputs.version }}

View File

@@ -7,13 +7,20 @@ runs:
- name: Remove unused packages
shell: bash
run: |
sudo apt remove -y "*msbuild*" "*powershell*" "*nuget*" "*bazel*" "*ansible*" "*terraform*" "*heroku*" "*aws*" azure-cli
# Ensure the repositories are synced
sudo apt update -y
# Actually perform the removals
sudo apt remove -y "*powershell*" "*nuget*" "*bazel*" "*ansible*" "*terraform*" "*heroku*" "*aws*" azure-cli
sudo apt remove -y "*nodejs*" "*npm*" "*yarn*" "*java*" "*kotlin*" "*golang*" "*swift*" "*julia*" "*fortran*" "*android*"
sudo apt remove -y "*apache2*" "*nginx*" "*firefox*" "*chromium*" "*chrome*" "*edge*"
sudo apt remove -y --allow-remove-essential -f shim-signed *python3*
# This removal command requires the prior removals due to unmet dependencies otherwise
sudo apt remove -y "*qemu*" "*sql*" "*texinfo*" "*imagemagick*"
sudo apt autoremove -y
sudo apt clean
docker system prune -a --volumes
# Reinstall python3 as a general dependency of a functional operating system
sudo apt install -y python3 --fix-missing
if: runner.os == 'Linux'
- name: Remove unused packages
@@ -31,19 +38,48 @@ runs:
shell: bash
run: |
if [ "$RUNNER_OS" == "Linux" ]; then
sudo apt install -y ca-certificates protobuf-compiler
sudo apt install -y ca-certificates protobuf-compiler libclang-dev
elif [ "$RUNNER_OS" == "Windows" ]; then
choco install protoc
elif [ "$RUNNER_OS" == "macOS" ]; then
brew install protobuf
brew install protobuf llvm
HOMEBREW_ROOT_PATH=/opt/homebrew # Apple Silicon
if [ $(uname -m) = "x86_64" ]; then HOMEBREW_ROOT_PATH=/usr/local; fi # Intel
ls $HOMEBREW_ROOT_PATH/opt/llvm/lib | grep "libclang.dylib" # Make sure this installed `libclang`
echo "DYLD_LIBRARY_PATH=$HOMEBREW_ROOT_PATH/opt/llvm/lib:$DYLD_LIBRARY_PATH" >> "$GITHUB_ENV"
fi
- name: Install solc
shell: bash
run: |
cargo install svm-rs
svm install 0.8.25
svm use 0.8.25
cargo +1.89 install svm-rs --version =0.5.18
svm install 0.8.26
svm use 0.8.26
- name: Remove preinstalled Docker
shell: bash
run: |
docker system prune -a --volumes
sudo apt remove -y *docker*
# Install uidmap which will be required for the explicitly installed Docker
sudo apt install uidmap
if: runner.os == 'Linux'
- name: Update system dependencies
shell: bash
run: |
sudo apt update -y
sudo apt upgrade -y
sudo apt autoremove -y
sudo apt clean
if: runner.os == 'Linux'
- name: Install rootless Docker
uses: docker/setup-docker-action@b60f85385d03ac8acfca6d9996982511d8620a19
with:
rootless: true
set-host: true
if: runner.os == 'Linux'
# - name: Cache Rust
# uses: Swatinem/rust-cache@a95ba195448af2da9b00fb742d14ffaaf3c21f43

View File

@@ -12,7 +12,7 @@ runs:
steps:
- name: Monero Wallet RPC Cache
id: cache-monero-wallet-rpc
uses: actions/cache@13aacd865c20de90d75de3b17ebe84f7a17d57d2
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809
with:
path: monero-wallet-rpc
key: monero-wallet-rpc-${{ runner.os }}-${{ runner.arch }}-${{ inputs.version }}

View File

@@ -12,7 +12,7 @@ runs:
steps:
- name: Monero Daemon Cache
id: cache-monerod
uses: actions/cache@13aacd865c20de90d75de3b17ebe84f7a17d57d2
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809
with:
path: /usr/bin/monerod
key: monerod-${{ runner.os }}-${{ runner.arch }}-${{ inputs.version }}

View File

@@ -1 +1 @@
nightly-2024-07-01
nightly-2025-11-01

View File

@@ -32,9 +32,15 @@ jobs:
-p dalek-ff-group \
-p minimal-ed448 \
-p ciphersuite \
-p ciphersuite-kp256 \
-p multiexp \
-p schnorr-signatures \
-p dleq \
-p dkg \
-p dkg-recovery \
-p dkg-dealer \
-p dkg-promote \
-p dkg-musig \
-p dkg-pedpop \
-p modular-frost \
-p frost-schnorrkel

View File

@@ -12,13 +12,13 @@ jobs:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Advisory Cache
uses: actions/cache@13aacd865c20de90d75de3b17ebe84f7a17d57d2
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809
with:
path: ~/.cargo/advisory-db
key: rust-advisory-db
- name: Install cargo deny
run: cargo install --locked cargo-deny
run: cargo +1.89 install cargo-deny --version =0.18.3
- name: Run cargo deny
run: cargo deny -L error --all-features check
run: cargo deny -L error --all-features check --hide-inclusion-graph

View File

@@ -11,7 +11,7 @@ jobs:
clippy:
strategy:
matrix:
os: [ubuntu-latest, macos-13, macos-14, windows-latest]
os: [ubuntu-latest, macos-15-intel, macos-latest, windows-latest]
runs-on: ${{ matrix.os }}
steps:
@@ -26,7 +26,7 @@ jobs:
uses: ./.github/actions/build-dependencies
- name: Install nightly rust
run: rustup toolchain install ${{ steps.nightly.outputs.version }} --profile minimal -t wasm32-unknown-unknown -c clippy
run: rustup toolchain install ${{ steps.nightly.outputs.version }} --profile minimal -t wasm32v1-none -c rust-src -c clippy
- name: Run Clippy
run: cargo +${{ steps.nightly.outputs.version }} clippy --all-features --all-targets -- -D warnings -A clippy::items_after_test_module
@@ -46,16 +46,16 @@ jobs:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Advisory Cache
uses: actions/cache@13aacd865c20de90d75de3b17ebe84f7a17d57d2
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809
with:
path: ~/.cargo/advisory-db
key: rust-advisory-db
- name: Install cargo deny
run: cargo install --locked cargo-deny
run: cargo +1.89 install cargo-deny --version =0.18.4
- name: Run cargo deny
run: cargo deny -L error --all-features check
run: cargo deny -L error --all-features check --hide-inclusion-graph
fmt:
runs-on: ubuntu-latest
@@ -79,5 +79,5 @@ jobs:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Verify all dependencies are in use
run: |
cargo install cargo-machete
cargo machete
cargo +1.89 install cargo-machete --version =0.8.0
cargo +1.89 machete

View File

@@ -1,72 +0,0 @@
name: Monero Tests
on:
push:
branches:
- develop
paths:
- "networks/monero/**"
- "processor/**"
pull_request:
paths:
- "networks/monero/**"
- "processor/**"
workflow_dispatch:
jobs:
# Only run these once since they will be consistent regardless of any node
unit-tests:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Test Dependencies
uses: ./.github/actions/test-dependencies
- name: Run Unit Tests Without Features
run: |
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-io --lib
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-generators --lib
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-primitives --lib
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-mlsag --lib
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-clsag --lib
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-borromean --lib
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-bulletproofs --lib
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-serai --lib
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-rpc --lib
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-simple-request-rpc --lib
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-address --lib
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-wallet --lib
# Doesn't run unit tests with features as the tests workflow will
integration-tests:
runs-on: ubuntu-latest
# Test against all supported protocol versions
strategy:
matrix:
version: [v0.17.3.2, v0.18.3.4]
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Test Dependencies
uses: ./.github/actions/test-dependencies
with:
monero-version: ${{ matrix.version }}
- name: Run Integration Tests Without Features
run: |
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-serai --test '*'
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-simple-request-rpc --test '*'
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-wallet --test '*'
- name: Run Integration Tests
# Don't run if the the tests workflow also will
if: ${{ matrix.version != 'v0.18.3.4' }}
run: |
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-serai --all-features --test '*'
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-simple-request-rpc --test '*'
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-wallet --all-features --test '*'

View File

@@ -33,16 +33,3 @@ jobs:
-p alloy-simple-request-transport \
-p ethereum-serai \
-p serai-ethereum-relayer \
-p monero-io \
-p monero-generators \
-p monero-primitives \
-p monero-mlsag \
-p monero-clsag \
-p monero-borromean \
-p monero-bulletproofs \
-p monero-serai \
-p monero-rpc \
-p monero-simple-request-rpc \
-p monero-address \
-p monero-wallet \
-p monero-serai-verify-chain

View File

@@ -46,16 +46,16 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Setup Ruby
uses: ruby/setup-ruby@v1
uses: ruby/setup-ruby@44511735964dcb71245e7e55f72539531f7bc0eb
with:
bundler-cache: true
cache-version: 0
working-directory: "${{ github.workspace }}/docs"
- name: Setup Pages
id: pages
uses: actions/configure-pages@v3
uses: actions/configure-pages@983d7736d9b0ae728b81ab479565c72886d7745b
- name: Build with Jekyll
run: cd ${{ github.workspace }}/docs && bundle exec jekyll build --baseurl "${{ steps.pages.outputs.base_path }}"
env:
@@ -69,12 +69,12 @@ jobs:
uses: ./.github/actions/build-dependencies
- name: Buld Rust docs
run: |
rustup toolchain install ${{ steps.nightly.outputs.version }} --profile minimal -t wasm32-unknown-unknown -c rust-docs
RUSTDOCFLAGS="--cfg docsrs" cargo +${{ steps.nightly.outputs.version }} doc --workspace --all-features
rustup toolchain install ${{ steps.nightly.outputs.version }} --profile minimal -t wasm32v1-none -c rust-docs
RUSTDOCFLAGS="--cfg docsrs" cargo +${{ steps.nightly.outputs.version }} doc --workspace --no-deps --all-features
mv target/doc docs/_site/rust
- name: Upload artifact
uses: actions/upload-pages-artifact@v3
uses: actions/upload-pages-artifact@7b1f4a764d45c48632c6b24a0339c27f5614fb0b
with:
path: "docs/_site/"
@@ -88,4 +88,4 @@ jobs:
steps:
- name: Deploy to GitHub Pages
id: deployment
uses: actions/deploy-pages@v4
uses: actions/deploy-pages@d6db90164ac5ed86f2b6aed7e0febac5b3c0c03e

.gitignore vendored
View File

@@ -1,7 +1,14 @@
target
# Don't commit any `Cargo.lock` which aren't the workspace's
Cargo.lock
!./Cargo.lock
# Don't commit any `Dockerfile`, as they're auto-generated, except the only one which isn't
Dockerfile
Dockerfile.fast-epoch
!orchestration/runtime/Dockerfile
.test-logs
.vscode

Cargo.lock generated

File diff suppressed because it is too large

View File

@@ -1,15 +1,8 @@
[workspace]
resolver = "2"
members = [
# Version patches
"patches/parking_lot_core",
"patches/parking_lot",
"patches/zstd",
"patches/rocksdb",
# std patches
"patches/matches",
"patches/is-terminal",
# Rewrites/redirects
"patches/option-ext",
@@ -28,12 +21,18 @@ members = [
"crypto/dalek-ff-group",
"crypto/ed448",
"crypto/ciphersuite",
"crypto/ciphersuite/kp256",
"crypto/multiexp",
"crypto/schnorr",
"crypto/dleq",
"crypto/dkg",
"crypto/dkg/recovery",
"crypto/dkg/dealer",
"crypto/dkg/promote",
"crypto/dkg/musig",
"crypto/dkg/pedpop",
"crypto/frost",
"crypto/schnorrkel",
@@ -43,20 +42,6 @@ members = [
"networks/ethereum",
"networks/ethereum/relayer",
"networks/monero/io",
"networks/monero/generators",
"networks/monero/primitives",
"networks/monero/ringct/mlsag",
"networks/monero/ringct/clsag",
"networks/monero/ringct/borromean",
"networks/monero/ringct/bulletproofs",
"networks/monero",
"networks/monero/rpc",
"networks/monero/rpc/simple-request",
"networks/monero/wallet/address",
"networks/monero/wallet",
"networks/monero/verify-chain",
"message-queue",
"processor/messages",
@@ -126,29 +111,26 @@ minimal-ed448 = { opt-level = 3 }
multiexp = { opt-level = 3 }
monero-serai = { opt-level = 3 }
monero-oxide = { opt-level = 3 }
[profile.release]
panic = "unwind"
overflow-checks = true
[patch.crates-io]
# Dependencies from monero-oxide which originate from within our own tree
std-shims = { path = "common/std-shims" }
simple-request = { path = "common/request" }
dalek-ff-group = { path = "crypto/dalek-ff-group" }
flexible-transcript = { path = "crypto/transcript" }
modular-frost = { path = "crypto/frost" }
# https://github.com/rust-lang-nursery/lazy-static.rs/issues/201
lazy_static = { git = "https://github.com/rust-lang-nursery/lazy-static.rs", rev = "5735630d46572f1e5377c8f2ba0f79d18f53b10c" }
parking_lot_core = { path = "patches/parking_lot_core" }
parking_lot = { path = "patches/parking_lot" }
# wasmtime pulls in an old version for this
zstd = { path = "patches/zstd" }
# Needed for WAL compression
rocksdb = { path = "patches/rocksdb" }
# 1.0.1 was yanked due to a breaking change (an extra field)
# 2.0 has fewer dependencies and still works within our tree
tiny-bip39 = { path = "patches/tiny-bip39" }
# is-terminal now has an std-based solution with an equivalent API
is-terminal = { path = "patches/is-terminal" }
# So does matches
# These have `std` alternatives
matches = { path = "patches/matches" }
home = { path = "patches/home" }
# directories-next was created because directories was unmaintained
# directories-next is now unmaintained while directories is maintained
@@ -159,7 +141,10 @@ option-ext = { path = "patches/option-ext" }
directories-next = { path = "patches/directories-next" }
[workspace.lints.clippy]
uninlined_format_args = "allow" # TODO
unwrap_or_default = "allow"
manual_is_multiple_of = "allow"
incompatible_msrv = "allow" # Manually verified with a GitHub workflow
borrow_as_ptr = "deny"
cast_lossless = "deny"
cast_possible_truncation = "deny"
@@ -184,14 +169,14 @@ large_stack_arrays = "deny"
linkedlist = "deny"
macro_use_imports = "deny"
manual_instant_elapsed = "deny"
manual_let_else = "deny"
# TODO manual_let_else = "deny"
manual_ok_or = "deny"
manual_string_new = "deny"
map_unwrap_or = "deny"
match_bool = "deny"
match_same_arms = "deny"
missing_fields_in_debug = "deny"
needless_continue = "deny"
# TODO needless_continue = "deny"
needless_pass_by_value = "deny"
ptr_cast_constness = "deny"
range_minus_one = "deny"
@@ -199,8 +184,7 @@ range_plus_one = "deny"
redundant_closure_for_method_calls = "deny"
redundant_else = "deny"
string_add_assign = "deny"
unchecked_duration_subtraction = "deny"
uninlined_format_args = "deny"
unchecked_time_subtraction = "deny"
unnecessary_box_returns = "deny"
unnecessary_join = "deny"
unnecessary_wraps = "deny"
@@ -208,3 +192,21 @@ unnested_or_patterns = "deny"
unused_async = "deny"
unused_self = "deny"
zero_sized_map_values = "deny"
# TODO: These were incurred when updating Rust as necessary for compilation, yet aren't being fixed
# at this time due to the impacts it'd have throughout the repository (when this isn't actively the
# primary branch, `next` is)
needless_continue = "allow"
needless_lifetimes = "allow"
useless_conversion = "allow"
empty_line_after_doc_comments = "allow"
manual_div_ceil = "allow"
manual_let_else = "allow"
unnecessary_map_or = "allow"
result_large_err = "allow"
unneeded_struct_pattern = "allow"
[workspace.lints.rust]
unused = "allow" # TODO: https://github.com/rust-lang/rust/issues/147648
mismatched_lifetime_syntaxes = "allow"
unused_attributes = "allow"
unused_parens = "allow"

View File

@@ -59,7 +59,6 @@ issued at the discretion of the Immunefi program managers.
- [Website](https://serai.exchange/): https://serai.exchange/
- [Immunefi](https://immunefi.com/bounty/serai/): https://immunefi.com/bounty/serai/
- [Twitter](https://twitter.com/SeraiDEX): https://twitter.com/SeraiDEX
- [Mastodon](https://cryptodon.lol/@serai): https://cryptodon.lol/@serai
- [Discord](https://discord.gg/mpEUtJR3vz): https://discord.gg/mpEUtJR3vz
- [Matrix](https://matrix.to/#/#serai:matrix.org): https://matrix.to/#/#serai:matrix.org
- [Reddit](https://www.reddit.com/r/SeraiDEX/): https://www.reddit.com/r/SeraiDEX/

View File

@@ -18,7 +18,7 @@ workspace = true
[dependencies]
parity-db = { version = "0.4", default-features = false, optional = true }
rocksdb = { version = "0.21", default-features = false, features = ["zstd"], optional = true }
rocksdb = { version = "0.24", default-features = false, features = ["zstd"], optional = true }
[features]
parity-db = ["dep:parity-db"]

View File

@@ -1,5 +1,5 @@
#![cfg_attr(docsrs, feature(doc_cfg))]
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
#![cfg_attr(docsrs, feature(doc_cfg))]
// Obtain a variable from the Serai environment/secret store.
pub fn var(variable: &str) -> Option<String> {

View File

@@ -1,4 +1,4 @@
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
#![cfg_attr(docsrs, feature(doc_cfg))]
#![doc = include_str!("../README.md")]
#![deny(missing_docs)]

View File

@@ -7,7 +7,7 @@ repository = "https://github.com/serai-dex/serai/tree/develop/common/simple-requ
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = ["http", "https", "async", "request", "ssl"]
edition = "2021"
rust-version = "1.64"
rust-version = "1.70"
[package.metadata.docs.rs]
all-features = true

View File

@@ -1,4 +1,4 @@
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
#![cfg_attr(docsrs, feature(doc_cfg))]
#![doc = include_str!("../README.md")]
use std::sync::Arc;

View File

@@ -1,13 +1,13 @@
[package]
name = "std-shims"
version = "0.1.1"
version = "0.1.4"
description = "A series of std shims to make alloc more feasible"
license = "MIT"
repository = "https://github.com/serai-dex/serai/tree/develop/common/std-shims"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = ["nostd", "no_std", "alloc", "io"]
edition = "2021"
rust-version = "1.70"
rust-version = "1.64"
[package.metadata.docs.rs]
all-features = true
@@ -17,7 +17,8 @@ rustdoc-args = ["--cfg", "docsrs"]
workspace = true
[dependencies]
spin = { version = "0.9", default-features = false, features = ["use_ticket_mutex", "lazy"] }
rustversion = { version = "1", default-features = false }
spin = { version = "0.10", default-features = false, features = ["use_ticket_mutex", "once", "lazy"] }
hashbrown = { version = "0.14", default-features = false, features = ["ahash", "inline-more"] }
[features]

View File

@@ -3,4 +3,9 @@
A crate which passes through to std when the default `std` feature is enabled,
yet provides a series of shims when it isn't.
`HashSet` and `HashMap` are provided via `hashbrown`.
No guarantee of one-to-one parity is provided. The shims provided aim to be sufficient for the
average case.
`HashSet` and `HashMap` are provided via `hashbrown`. Synchronization primitives are provided via
`spin` (avoiding a requirement on `critical-section`).

View File

@@ -1,4 +1,4 @@
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
#![cfg_attr(docsrs, feature(doc_cfg))]
#![doc = include_str!("../README.md")]
#![cfg_attr(not(feature = "std"), no_std)]
@@ -11,3 +11,64 @@ pub mod io;
pub use alloc::vec;
pub use alloc::str;
pub use alloc::string;
pub mod prelude {
#[rustversion::before(1.73)]
#[doc(hidden)]
pub trait StdShimsDivCeil {
fn div_ceil(self, rhs: Self) -> Self;
}
#[rustversion::before(1.73)]
mod impl_divceil {
use super::StdShimsDivCeil;
impl StdShimsDivCeil for u8 {
fn div_ceil(self, rhs: Self) -> Self {
(self + (rhs - 1)) / rhs
}
}
impl StdShimsDivCeil for u16 {
fn div_ceil(self, rhs: Self) -> Self {
(self + (rhs - 1)) / rhs
}
}
impl StdShimsDivCeil for u32 {
fn div_ceil(self, rhs: Self) -> Self {
(self + (rhs - 1)) / rhs
}
}
impl StdShimsDivCeil for u64 {
fn div_ceil(self, rhs: Self) -> Self {
(self + (rhs - 1)) / rhs
}
}
impl StdShimsDivCeil for u128 {
fn div_ceil(self, rhs: Self) -> Self {
(self + (rhs - 1)) / rhs
}
}
impl StdShimsDivCeil for usize {
fn div_ceil(self, rhs: Self) -> Self {
(self + (rhs - 1)) / rhs
}
}
}
#[cfg(feature = "std")]
#[rustversion::before(1.74)]
#[doc(hidden)]
pub trait StdShimsIoErrorOther {
fn other<E>(error: E) -> Self
where
E: Into<Box<dyn std::error::Error + Send + Sync>>;
}
#[cfg(feature = "std")]
#[rustversion::before(1.74)]
impl StdShimsIoErrorOther for std::io::Error {
fn other<E>(error: E) -> Self
where
E: Into<Box<dyn std::error::Error + Send + Sync>>,
{
std::io::Error::new(std::io::ErrorKind::Other, error)
}
}
}

View File

@@ -25,7 +25,11 @@ mod mutex_shim {
}
pub use mutex_shim::{ShimMutex as Mutex, MutexGuard};
#[cfg(feature = "std")]
pub use std::sync::LazyLock;
#[cfg(not(feature = "std"))]
pub use spin::Lazy as LazyLock;
#[rustversion::before(1.80)]
#[cfg(feature = "std")]
pub use spin::Lazy as LazyLock;
#[rustversion::since(1.80)]
#[cfg(feature = "std")]
pub use std::sync::LazyLock;

View File

@@ -7,7 +7,7 @@ repository = "https://github.com/serai-dex/serai/tree/develop/common/zalloc"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = []
edition = "2021"
rust-version = "1.77.0"
rust-version = "1.77"
[package.metadata.docs.rs]
all-features = true

View File

@@ -1,5 +1,5 @@
#![cfg_attr(docsrs, feature(doc_cfg))]
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
#![cfg_attr(docsrs, feature(doc_cfg))]
#![cfg_attr(all(zalloc_rustc_nightly, feature = "allocator"), feature(allocator_api))]
//! Implementation of a Zeroizing Allocator, enabling zeroizing memory on deallocation.

View File

@@ -25,8 +25,10 @@ rand_core = { version = "0.6", default-features = false, features = ["std"] }
blake2 = { version = "0.10", default-features = false, features = ["std"] }
transcript = { package = "flexible-transcript", path = "../crypto/transcript", default-features = false, features = ["std", "recommended"] }
dalek-ff-group = { path = "../crypto/dalek-ff-group", default-features = false, features = ["std"] }
ciphersuite = { path = "../crypto/ciphersuite", default-features = false, features = ["std"] }
schnorr = { package = "schnorr-signatures", path = "../crypto/schnorr", default-features = false, features = ["std"] }
schnorr = { package = "schnorr-signatures", path = "../crypto/schnorr", default-features = false, features = ["std", "aggregate"] }
dkg-musig = { path = "../crypto/dkg/musig", default-features = false, features = ["std"] }
frost = { package = "modular-frost", path = "../crypto/frost" }
frost-schnorrkel = { path = "../crypto/schnorrkel" }
@@ -40,7 +42,7 @@ processor-messages = { package = "serai-processor-messages", path = "../processo
message-queue = { package = "serai-message-queue", path = "../message-queue" }
tributary = { package = "tributary-chain", path = "./tributary" }
sp-application-crypto = { git = "https://github.com/serai-dex/substrate", default-features = false, features = ["std"] }
sp-application-crypto = { git = "https://github.com/serai-dex/patch-polkadot-sdk", rev = "da19e1f8ca7a9e2cbf39fbfa493918eeeb45e10b", default-features = false, features = ["std"] }
serai-client = { path = "../substrate/client", default-features = false, features = ["serai", "borsh"] }
hex = { version = "0.4", default-features = false, features = ["std"] }
@@ -55,8 +57,8 @@ libp2p = { version = "0.52", default-features = false, features = ["tokio", "tcp
[dev-dependencies]
tributary = { package = "tributary-chain", path = "./tributary", features = ["tests"] }
sp-application-crypto = { git = "https://github.com/serai-dex/substrate", default-features = false, features = ["std"] }
sp-runtime = { git = "https://github.com/serai-dex/substrate", default-features = false, features = ["std"] }
sp-application-crypto = { git = "https://github.com/serai-dex/patch-polkadot-sdk", rev = "da19e1f8ca7a9e2cbf39fbfa493918eeeb45e10b", default-features = false, features = ["std"] }
sp-runtime = { git = "https://github.com/serai-dex/patch-polkadot-sdk", rev = "da19e1f8ca7a9e2cbf39fbfa493918eeeb45e10b", default-features = false, features = ["std"] }
[features]
longer-reattempts = []

View File

@@ -12,7 +12,7 @@ use tokio::{
use borsh::BorshSerialize;
use sp_application_crypto::RuntimePublic;
use serai_client::{
primitives::{ExternalNetworkId, Signature, EXTERNAL_NETWORKS},
primitives::{ExternalNetworkId, EXTERNAL_NETWORKS},
validator_sets::primitives::{ExternalValidatorSet, Session},
Serai, SeraiError, TemporalSerai,
};
@@ -164,7 +164,7 @@ impl<D: Db> CosignEvaluator<D> {
if !keys
.0
.verify(&cosign_block_msg(cosign.block_number, cosign.block), &Signature(cosign.signature))
.verify(&cosign_block_msg(cosign.block_number, cosign.block), &cosign.signature.into())
{
log::warn!("received cosigned block with an invalid signature");
return Ok(());

View File

@@ -1,3 +1,5 @@
#![expect(clippy::cast_possible_truncation)]
use core::ops::Deref;
use std::{
sync::{OnceLock, Arc},
@@ -8,12 +10,13 @@ use std::{
use zeroize::{Zeroize, Zeroizing};
use rand_core::OsRng;
use dalek_ff_group::Ristretto;
use ciphersuite::{
group::{
ff::{Field, PrimeField},
GroupEncoding,
},
Ciphersuite, Ristretto,
Ciphersuite,
};
use schnorr::SchnorrSignature;
use frost::Participant;
@@ -268,12 +271,15 @@ async fn handle_processor_message<D: Db, P: P2p>(
coordinator::ProcessorMessage::SignedSlashReport { session, signature } => {
let set = ExternalValidatorSet { network, session: *session };
let signature: &[u8] = signature.as_ref();
let signature = serai_client::Signature(signature.try_into().unwrap());
let signature = <[u8; 64]>::try_from(signature).unwrap();
let signature: serai_client::Signature = signature.into();
let slashes = crate::tributary::SlashReport::get(&txn, set)
.expect("signed slash report despite not having slash report locally");
let slashes_pubs =
slashes.iter().map(|(address, points)| (Public(*address), *points)).collect::<Vec<_>>();
let slashes_pubs = slashes
.iter()
.map(|(address, points)| (Public::from(*address), *points))
.collect::<Vec<_>>();
let tx = serai_client::SeraiValidatorSets::report_slashes(
network,
@@ -283,7 +289,7 @@ async fn handle_processor_message<D: Db, P: P2p>(
.collect::<Vec<_>>()
.try_into()
.unwrap(),
signature.clone(),
signature,
);
loop {
@@ -500,7 +506,7 @@ async fn handle_processor_message<D: Db, P: P2p>(
&mut txn,
key,
spec,
&KeyPair(Public(substrate_key), network_key.try_into().unwrap()),
&KeyPair(Public::from(substrate_key), network_key.try_into().unwrap()),
id.attempt,
);

View File

@@ -14,7 +14,8 @@
use zeroize::Zeroizing;
use ciphersuite::{Ciphersuite, Ristretto};
use dalek_ff_group::Ristretto;
use ciphersuite::Ciphersuite;
use borsh::{BorshSerialize, BorshDeserialize};

View File

@@ -6,7 +6,8 @@ use std::{
use zeroize::Zeroizing;
use ciphersuite::{group::GroupEncoding, Ciphersuite, Ristretto};
use dalek_ff_group::Ristretto;
use ciphersuite::{group::GroupEncoding, Ciphersuite};
use serai_client::{
coins::CoinsEvent,

View File

@@ -7,9 +7,10 @@ use zeroize::Zeroizing;
use rand_core::{RngCore, CryptoRng, OsRng};
use futures_util::{task::Poll, poll};
use dalek_ff_group::Ristretto;
use ciphersuite::{
group::{ff::Field, GroupEncoding},
Ciphersuite, Ristretto,
Ciphersuite,
};
use sp_application_crypto::sr25519;
@@ -54,7 +55,9 @@ pub fn new_spec<R: RngCore + CryptoRng>(
let set_participants = keys
.iter()
.map(|key| (sr25519::Public((<Ristretto as Ciphersuite>::generator() * **key).to_bytes()), 1))
.map(|key| {
(sr25519::Public::from((<Ristretto as Ciphersuite>::generator() * **key).to_bytes()), 1)
})
.collect::<Vec<_>>();
let res = TributarySpec::new(serai_block, start_time, set, set_participants);

View File

@@ -4,7 +4,8 @@ use std::collections::HashMap;
use zeroize::Zeroizing;
use rand_core::{RngCore, OsRng};
use ciphersuite::{group::GroupEncoding, Ciphersuite, Ristretto};
use dalek_ff_group::Ristretto;
use ciphersuite::{group::GroupEncoding, Ciphersuite};
use frost::Participant;
use sp_runtime::traits::Verify;
@@ -315,7 +316,8 @@ async fn dkg_test() {
OsRng.fill_bytes(&mut substrate_key);
let mut network_key = vec![0; usize::try_from((OsRng.next_u64() % 32) + 32).unwrap()];
OsRng.fill_bytes(&mut network_key);
let key_pair = KeyPair(serai_client::Public(substrate_key), network_key.try_into().unwrap());
let key_pair =
KeyPair(serai_client::Public::from(substrate_key), network_key.try_into().unwrap());
let mut txs = vec![];
for (i, key) in keys.iter().enumerate() {
@@ -360,9 +362,9 @@ async fn dkg_test() {
assert_eq!(self.key_pair, key_pair);
assert!(signature.verify(
&*serai_client::validator_sets::primitives::set_keys_message(&set, &[], &key_pair),
&serai_client::Public(
frost::dkg::musig::musig_key::<Ristretto>(
&serai_client::validator_sets::primitives::musig_context(set.into()),
&serai_client::Public::from(
dkg_musig::musig_key_vartime::<Ristretto>(
serai_client::validator_sets::primitives::musig_context(set.into()),
&self.spec.validators().into_iter().map(|(validator, _)| validator).collect::<Vec<_>>()
)
.unwrap()

View File

@@ -2,7 +2,8 @@ use core::fmt::Debug;
use rand_core::{RngCore, OsRng};
use ciphersuite::{group::Group, Ciphersuite, Ristretto};
use dalek_ff_group::Ristretto;
use ciphersuite::{group::Group, Ciphersuite};
use scale::{Encode, Decode};
use serai_client::{

View File

@@ -3,7 +3,8 @@ use std::{sync::Arc, collections::HashSet};
use rand_core::OsRng;
use ciphersuite::{group::GroupEncoding, Ciphersuite, Ristretto};
use dalek_ff_group::Ristretto;
use ciphersuite::{group::GroupEncoding, Ciphersuite};
use tokio::{
sync::{mpsc, broadcast},

View File

@@ -3,7 +3,8 @@ use std::collections::HashMap;
use scale::Encode;
use borsh::{BorshSerialize, BorshDeserialize};
use ciphersuite::{group::GroupEncoding, Ciphersuite, Ristretto};
use dalek_ff_group::Ristretto;
use ciphersuite::{group::GroupEncoding, Ciphersuite};
use frost::Participant;
use serai_client::validator_sets::primitives::{KeyPair, ExternalValidatorSet};

View File

@@ -4,11 +4,12 @@ use std::collections::HashMap;
use zeroize::Zeroizing;
use rand_core::OsRng;
use ciphersuite::{group::GroupEncoding, Ciphersuite, Ristretto};
use dalek_ff_group::Ristretto;
use ciphersuite::{group::GroupEncoding, Ciphersuite};
use frost::dkg::Participant;
use scale::{Encode, Decode};
use serai_client::{Signature, validator_sets::primitives::KeyPair};
use serai_client::validator_sets::primitives::KeyPair;
use tributary::{Signed, TransactionKind, TransactionTrait};
@@ -553,7 +554,7 @@ impl<
self.spec.set(),
removed.into_iter().map(|key| key.to_bytes().into()).collect(),
key_pair,
Signature(sig),
sig.into(),
)
.await;
}

View File

@@ -1,4 +1,5 @@
use ciphersuite::{group::GroupEncoding, Ciphersuite, Ristretto};
use dalek_ff_group::Ristretto;
use ciphersuite::{group::GroupEncoding, Ciphersuite};
use serai_client::validator_sets::primitives::ExternalValidatorSet;

View File

@@ -3,7 +3,8 @@ use std::{sync::Arc, collections::HashSet};
use zeroize::Zeroizing;
use ciphersuite::{group::GroupEncoding, Ciphersuite, Ristretto};
use dalek_ff_group::Ristretto;
use ciphersuite::{group::GroupEncoding, Ciphersuite};
use tokio::sync::broadcast;

View File

@@ -63,16 +63,13 @@ use rand_core::OsRng;
use blake2::{Digest, Blake2s256};
use dalek_ff_group::Ristretto;
use ciphersuite::{
group::{ff::PrimeField, GroupEncoding},
Ciphersuite, Ristretto,
};
use frost::{
FrostError,
dkg::{Participant, musig::musig},
ThresholdKeys,
sign::*,
Ciphersuite,
};
use dkg_musig::musig;
use frost::{FrostError, dkg::Participant, ThresholdKeys, sign::*};
use frost_schnorrkel::Schnorrkel;
use scale::Encode;
@@ -119,7 +116,7 @@ impl<T: DbTxn, C: Encode> SigningProtocol<'_, T, C> {
let algorithm = Schnorrkel::new(b"substrate");
let keys: ThresholdKeys<Ristretto> =
musig(&musig_context(self.spec.set().into()), self.key, participants)
musig(musig_context(self.spec.set().into()), self.key.clone(), participants)
.expect("signing for a set we aren't in/validator present multiple times")
.into();
@@ -298,7 +295,7 @@ impl<T: DbTxn> DkgConfirmer<'_, T> {
threshold_i_map_to_keys_and_musig_i_map(self.spec, &self.removed, self.key, preprocesses).1;
let msg = set_keys_message(
&self.spec.set(),
&self.removed.iter().map(|key| Public(key.to_bytes())).collect::<Vec<_>>(),
&self.removed.iter().map(|key| Public::from(key.to_bytes())).collect::<Vec<_>>(),
key_pair,
);
self.signing_protocol().share_internal(&participants, preprocesses, &msg)

View File

@@ -3,7 +3,8 @@ use std::{io, collections::HashMap};
use transcript::{Transcript, RecommendedTranscript};
use ciphersuite::{group::GroupEncoding, Ciphersuite, Ristretto};
use dalek_ff_group::Ristretto;
use ciphersuite::{group::GroupEncoding, Ciphersuite};
use frost::Participant;
use scale::Encode;

View File

@@ -7,9 +7,10 @@ use rand_core::{RngCore, CryptoRng};
use blake2::{Digest, Blake2s256};
use transcript::{Transcript, RecommendedTranscript};
use dalek_ff_group::Ristretto;
use ciphersuite::{
group::{ff::Field, GroupEncoding},
Ciphersuite, Ristretto,
Ciphersuite,
};
use schnorr::SchnorrSignature;
use frost::Participant;

View File

@@ -27,7 +27,8 @@ rand_chacha = { version = "0.3", default-features = false, features = ["std"] }
blake2 = { version = "0.10", default-features = false, features = ["std"] }
transcript = { package = "flexible-transcript", path = "../../crypto/transcript", default-features = false, features = ["std", "recommended"] }
ciphersuite = { package = "ciphersuite", path = "../../crypto/ciphersuite", default-features = false, features = ["std", "ristretto"] }
dalek-ff-group = { path = "../../crypto/dalek-ff-group" }
ciphersuite = { package = "ciphersuite", path = "../../crypto/ciphersuite", default-features = false, features = ["std"] }
schnorr = { package = "schnorr-signatures", path = "../../crypto/schnorr", default-features = false, features = ["std"] }
hex = { version = "0.4", default-features = false, features = ["std"] }

View File

@@ -1,6 +1,7 @@
use std::collections::{VecDeque, HashSet};
use ciphersuite::{group::GroupEncoding, Ciphersuite, Ristretto};
use dalek_ff_group::Ristretto;
use ciphersuite::{group::GroupEncoding, Ciphersuite};
use serai_db::{Get, DbTxn, Db};

View File

@@ -5,7 +5,8 @@ use async_trait::async_trait;
use zeroize::Zeroizing;
use ciphersuite::{Ciphersuite, Ristretto};
use dalek_ff_group::Ristretto;
use ciphersuite::Ciphersuite;
use scale::Decode;
use futures_channel::mpsc::UnboundedReceiver;

View File

@@ -1,6 +1,7 @@
use std::collections::HashMap;
use ciphersuite::{Ciphersuite, Ristretto};
use dalek_ff_group::Ristretto;
use ciphersuite::Ciphersuite;
use serai_db::{DbTxn, Db};

View File

@@ -11,12 +11,13 @@ use rand_chacha::ChaCha12Rng;
use transcript::{Transcript, RecommendedTranscript};
use dalek_ff_group::Ristretto;
use ciphersuite::{
group::{
GroupEncoding,
ff::{Field, PrimeField},
},
Ciphersuite, Ristretto,
Ciphersuite,
};
use schnorr::{
SchnorrSignature,

View File

@@ -4,7 +4,8 @@ use scale::{Encode, Decode, IoReader};
use blake2::{Digest, Blake2s256};
use ciphersuite::{Ciphersuite, Ristretto};
use dalek_ff_group::Ristretto;
use ciphersuite::Ciphersuite;
use crate::{
transaction::{Transaction, TransactionKind, TransactionError},

View File

@@ -1,9 +1,11 @@
use std::{sync::Arc, io, collections::HashMap, fmt::Debug};
use blake2::{Digest, Blake2s256};
use dalek_ff_group::Ristretto;
use ciphersuite::{
group::{ff::Field, Group},
Ciphersuite, Ristretto,
Ciphersuite,
};
use schnorr::SchnorrSignature;

View File

@@ -10,7 +10,8 @@ use rand::rngs::OsRng;
use blake2::{Digest, Blake2s256};
use ciphersuite::{group::ff::Field, Ciphersuite, Ristretto};
use dalek_ff_group::Ristretto;
use ciphersuite::{group::ff::Field, Ciphersuite};
use serai_db::{DbTxn, Db, MemDb};

View File

@@ -3,7 +3,8 @@ use std::{sync::Arc, collections::HashMap};
use zeroize::Zeroizing;
use rand::{RngCore, rngs::OsRng};
use ciphersuite::{group::ff::Field, Ciphersuite, Ristretto};
use dalek_ff_group::Ristretto;
use ciphersuite::{group::ff::Field, Ciphersuite};
use tendermint::ext::Commit;

View File

@@ -6,9 +6,10 @@ use rand::{RngCore, CryptoRng, rngs::OsRng};
use blake2::{Digest, Blake2s256};
use dalek_ff_group::Ristretto;
use ciphersuite::{
group::{ff::Field, Group},
Ciphersuite, Ristretto,
Ciphersuite,
};
use schnorr::SchnorrSignature;

View File

@@ -2,7 +2,8 @@ use rand::rngs::OsRng;
use blake2::{Digest, Blake2s256};
use ciphersuite::{group::ff::Field, Ciphersuite, Ristretto};
use dalek_ff_group::Ristretto;
use ciphersuite::{group::ff::Field, Ciphersuite};
use crate::{
ReadWrite,

View File

@@ -3,7 +3,8 @@ use std::sync::Arc;
use zeroize::Zeroizing;
use rand::{RngCore, rngs::OsRng};
use ciphersuite::{Ristretto, Ciphersuite, group::ff::Field};
use dalek_ff_group::Ristretto;
use ciphersuite::{Ciphersuite, group::ff::Field};
use scale::Encode;

View File

@@ -6,9 +6,10 @@ use thiserror::Error;
use blake2::{Digest, Blake2b512};
use dalek_ff_group::Ristretto;
use ciphersuite::{
group::{Group, GroupEncoding},
Ciphersuite, Ristretto,
Ciphersuite,
};
use schnorr::SchnorrSignature;

View File

@@ -1,3 +1,5 @@
#![expect(clippy::cast_possible_truncation)]
use core::fmt::Debug;
use std::{

View File

@@ -1,13 +1,13 @@
[package]
name = "ciphersuite"
version = "0.4.1"
version = "0.4.2"
description = "Ciphersuites built around ff/group"
license = "MIT"
repository = "https://github.com/serai-dex/serai/tree/develop/crypto/ciphersuite"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = ["ciphersuite", "ff", "group"]
edition = "2021"
rust-version = "1.74"
rust-version = "1.66"
[package.metadata.docs.rs]
all-features = true
@@ -24,22 +24,12 @@ rand_core = { version = "0.6", default-features = false }
zeroize = { version = "^1.5", default-features = false, features = ["derive"] }
subtle = { version = "^2.4", default-features = false }
digest = { version = "0.10", default-features = false }
digest = { version = "0.10", default-features = false, features = ["core-api"] }
transcript = { package = "flexible-transcript", path = "../transcript", version = "^0.3.2", default-features = false }
sha2 = { version = "0.10", default-features = false, optional = true }
sha3 = { version = "0.10", default-features = false, optional = true }
ff = { version = "0.13", default-features = false, features = ["bits"] }
group = { version = "0.13", default-features = false }
dalek-ff-group = { path = "../dalek-ff-group", version = "0.4", default-features = false, optional = true }
elliptic-curve = { version = "0.13", default-features = false, features = ["hash2curve"], optional = true }
p256 = { version = "^0.13.1", default-features = false, features = ["arithmetic", "bits", "hash2curve"], optional = true }
k256 = { version = "^0.13.1", default-features = false, features = ["arithmetic", "bits", "hash2curve"], optional = true }
minimal-ed448 = { path = "../ed448", version = "0.4", default-features = false, optional = true }
[dev-dependencies]
hex = { version = "0.4", default-features = false, features = ["std"] }
@@ -48,7 +38,7 @@ rand_core = { version = "0.6", default-features = false, features = ["std"] }
ff-group-tests = { version = "0.13", path = "../ff-group-tests" }
[features]
alloc = ["std-shims"]
alloc = ["std-shims", "ff/alloc"]
std = [
"std-shims/std",
@@ -59,27 +49,8 @@ std = [
"digest/std",
"transcript/std",
"sha2?/std",
"sha3?/std",
"ff/std",
"dalek-ff-group?/std",
"elliptic-curve?/std",
"p256?/std",
"k256?/std",
"minimal-ed448?/std",
]
dalek = ["sha2", "dalek-ff-group"]
ed25519 = ["dalek"]
ristretto = ["dalek"]
kp256 = ["sha2", "elliptic-curve"]
p256 = ["kp256", "dep:p256"]
secp256k1 = ["kp256", "k256"]
ed448 = ["sha3", "minimal-ed448"]
default = ["std"]

View File

@@ -21,6 +21,8 @@ Their `hash_to_F` is the
[IETF's hash to curve](https://www.ietf.org/archive/id/draft-irtf-cfrg-hash-to-curve-16.html),
yet applied to their scalar field.
Please see the [`ciphersuite-kp256`](https://docs.rs/ciphersuite-kp256) crate for more info.
### Ed25519/Ristretto
Ed25519/Ristretto are offered via
@@ -33,6 +35,8 @@ the draft
[RFC-RISTRETTO](https://www.ietf.org/archive/id/draft-irtf-cfrg-ristretto255-decaf448-05.html).
The domain-separation tag is naively prefixed to the message.
Please see the [`dalek-ff-group`](https://docs.rs/dalek-ff-group) crate for more info.
### Ed448
Ed448 is offered via [minimal-ed448](https://crates.io/crates/minimal-ed448), an
@@ -42,3 +46,5 @@ to its prime-order subgroup.
Its `hash_to_F` is the wide reduction of SHAKE256, with a 114-byte output, as
used in [RFC-8032](https://www.rfc-editor.org/rfc/rfc8032). The
domain-separation tag is naively prefixed to the message.
Please see the [`minimal-ed448`](https://docs.rs/minimal-ed448) crate for more info.
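As an illustrative sketch (not from this changeset): deriving a scalar via `hash_to_F`, assuming `Ristretto` now lives in `dalek-ff-group` and implements the `ciphersuite::Ciphersuite` trait as shown elsewhere in this diff.

use ciphersuite::Ciphersuite;
use dalek_ff_group::Ristretto;

fn example_scalar() -> <Ristretto as Ciphersuite>::F {
  // The domain-separation tag is naively prefixed to the message, so avoid
  // DSTs which are substrings of one another (per the warning above).
  <Ristretto as Ciphersuite>::hash_to_F(b"example-dst", b"message to hash")
}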

View File

@@ -0,0 +1,55 @@
[package]
name = "ciphersuite-kp256"
version = "0.4.0"
description = "Ciphersuites built around ff/group"
license = "MIT"
repository = "https://github.com/serai-dex/serai/tree/develop/crypto/ciphersuite/kp256"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = ["ciphersuite", "ff", "group"]
edition = "2021"
rust-version = "1.66"
[package.metadata.docs.rs]
all-features = true
rustdoc-args = ["--cfg", "docsrs"]
[lints]
workspace = true
[dependencies]
rand_core = { version = "0.6", default-features = false }
zeroize = { version = "^1.5", default-features = false, features = ["derive"] }
sha2 = { version = "0.10", default-features = false }
elliptic-curve = { version = "0.13", default-features = false, features = ["hash2curve"] }
p256 = { version = "^0.13.1", default-features = false, features = ["arithmetic", "bits", "hash2curve"] }
k256 = { version = "^0.13.1", default-features = false, features = ["arithmetic", "bits", "hash2curve"] }
ciphersuite = { path = "../", version = "0.4", default-features = false }
[dev-dependencies]
hex = { version = "0.4", default-features = false, features = ["std"] }
rand_core = { version = "0.6", default-features = false, features = ["std"] }
ff-group-tests = { version = "0.13", path = "../../ff-group-tests" }
[features]
alloc = ["ciphersuite/alloc"]
std = [
"rand_core/std",
"zeroize/std",
"sha2/std",
"elliptic-curve/std",
"p256/std",
"k256/std",
"ciphersuite/std",
]
default = ["std"]

View File

@@ -1,6 +1,6 @@
MIT License
Copyright (c) 2022-2024 Luke Parker
Copyright (c) 2021-2023 Luke Parker
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal

View File

@@ -0,0 +1,3 @@
# Ciphersuite {k, p}256
SECP256k1 and P-256 Ciphersuites around k256 and p256.

View File

@@ -1,16 +1,17 @@
#![cfg_attr(docsrs, feature(doc_cfg))]
#![cfg_attr(not(feature = "std"), no_std)]
use zeroize::Zeroize;
use sha2::Sha256;
use group::ff::PrimeField;
use elliptic_curve::{
generic_array::GenericArray,
bigint::{NonZero, CheckedAdd, Encoding, U384},
hash2curve::{Expander, ExpandMsg, ExpandMsgXmd},
};
use crate::Ciphersuite;
use ciphersuite::{group::ff::PrimeField, Ciphersuite};
macro_rules! kp_curve {
(
@@ -107,12 +108,9 @@ fn test_oversize_dst<C: Ciphersuite>() {
/// Ciphersuite for Secp256k1.
///
/// hash_to_F is implemented via the IETF draft for hash to curve's hash_to_field (v16).
#[cfg(feature = "secp256k1")]
#[derive(Clone, Copy, PartialEq, Eq, Debug, Zeroize)]
pub struct Secp256k1;
#[cfg(feature = "secp256k1")]
kp_curve!("secp256k1", k256, Secp256k1, b"secp256k1");
#[cfg(feature = "secp256k1")]
#[test]
fn test_secp256k1() {
ff_group_tests::group::test_prime_group_bits::<_, k256::ProjectivePoint>(&mut rand_core::OsRng);
@@ -145,12 +143,9 @@ fn test_secp256k1() {
/// Ciphersuite for P-256.
///
/// hash_to_F is implemented via the IETF draft for hash to curve's hash_to_field (v16).
#[cfg(feature = "p256")]
#[derive(Clone, Copy, PartialEq, Eq, Debug, Zeroize)]
pub struct P256;
#[cfg(feature = "p256")]
kp_curve!("p256", p256, P256, b"P-256");
#[cfg(feature = "p256")]
#[test]
fn test_p256() {
ff_group_tests::group::test_prime_group_bits::<_, p256::ProjectivePoint>(&mut rand_core::OsRng);
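An illustrative sketch (not from this changeset) of the relocated ciphersuites, assuming the `ciphersuite-kp256` crate root exports `Secp256k1` and `P256` as defined above:

use ciphersuite::Ciphersuite;
use ciphersuite_kp256::{Secp256k1, P256};

fn kp256_sketch() {
  // hash_to_F follows the IETF hash-to-curve draft's hash_to_field, applied
  // to each curve's scalar field (per the doc comments above).
  let k_scalar = <Secp256k1 as Ciphersuite>::hash_to_F(b"dst-secp256k1", b"msg");
  let p_scalar = <P256 as Ciphersuite>::hash_to_F(b"dst-p256", b"msg");
  let _ = <Secp256k1 as Ciphersuite>::generator() * k_scalar;
  let _ = <P256 as Ciphersuite>::generator() * p_scalar;
}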

View File

@@ -2,7 +2,7 @@
Ciphersuites for elliptic curves premised on ff/group.
This library, except for the not recommended Ed448 ciphersuite, was
This library was
[audited by Cypher Stack in March 2023](https://github.com/serai-dex/serai/raw/e1bb2c191b7123fd260d008e31656d090d559d21/audits/Cypher%20Stack%20crypto%20March%202023/Audit.pdf),
culminating in commit
[669d2dbffc1dafb82a09d9419ea182667115df06](https://github.com/serai-dex/serai/tree/669d2dbffc1dafb82a09d9419ea182667115df06).

View File

@@ -1,9 +1,12 @@
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
#![cfg_attr(docsrs, feature(doc_cfg))]
#![doc = include_str!("lib.md")]
#![cfg_attr(not(feature = "std"), no_std)]
use core::fmt::Debug;
#[cfg(any(feature = "alloc", feature = "std"))]
#[allow(unused_imports)]
use std_shims::prelude::*;
#[cfg(any(feature = "alloc", feature = "std"))]
use std_shims::io::{self, Read};
use rand_core::{RngCore, CryptoRng};
@@ -23,25 +26,6 @@ use group::{
#[cfg(any(feature = "alloc", feature = "std"))]
use group::GroupEncoding;
#[cfg(feature = "dalek")]
mod dalek;
#[cfg(feature = "ristretto")]
pub use dalek::Ristretto;
#[cfg(feature = "ed25519")]
pub use dalek::Ed25519;
#[cfg(feature = "kp256")]
mod kp256;
#[cfg(feature = "secp256k1")]
pub use kp256::Secp256k1;
#[cfg(feature = "p256")]
pub use kp256::P256;
#[cfg(feature = "ed448")]
mod ed448;
#[cfg(feature = "ed448")]
pub use ed448::*;
/// Unified trait defining a ciphersuite around an elliptic curve.
pub trait Ciphersuite:
'static + Send + Sync + Clone + Copy + PartialEq + Eq + Debug + Zeroize
@@ -99,6 +83,9 @@ pub trait Ciphersuite:
}
/// Read a canonical point from something implementing std::io::Read.
///
/// The provided implementation is safe so long as `GroupEncoding::to_bytes` always returns a
/// canonical serialization.
#[cfg(any(feature = "alloc", feature = "std"))]
#[allow(non_snake_case)]
fn read_G<R: Read>(reader: &mut R) -> io::Result<Self::G> {

View File

@@ -1,13 +1,13 @@
[package]
name = "dalek-ff-group"
version = "0.4.1"
version = "0.4.4"
description = "ff/group bindings around curve25519-dalek"
license = "MIT"
repository = "https://github.com/serai-dex/serai/tree/develop/crypto/dalek-ff-group"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = ["curve25519", "ed25519", "ristretto", "dalek", "group"]
edition = "2021"
rust-version = "1.66"
rust-version = "1.65"
[package.metadata.docs.rs]
all-features = true
@@ -25,18 +25,22 @@ subtle = { version = "^2.4", default-features = false }
rand_core = { version = "0.6", default-features = false }
digest = { version = "0.10", default-features = false }
sha2 = { version = "0.10", default-features = false }
ff = { version = "0.13", default-features = false, features = ["bits"] }
group = { version = "0.13", default-features = false }
ciphersuite = { path = "../ciphersuite", default-features = false }
crypto-bigint = { version = "0.5", default-features = false, features = ["zeroize"] }
curve25519-dalek = { version = ">= 4.0, < 4.2", default-features = false, features = ["alloc", "zeroize", "digest", "group", "precomputed-tables"] }
[dev-dependencies]
hex = "0.4"
rand_core = { version = "0.6", default-features = false, features = ["std"] }
ff-group-tests = { path = "../ff-group-tests" }
[features]
std = ["zeroize/std", "subtle/std", "rand_core/std", "digest/std"]
alloc = ["zeroize/alloc", "ciphersuite/alloc"]
std = ["alloc", "zeroize/std", "subtle/std", "rand_core/std", "digest/std", "sha2/std", "ciphersuite/std"]
default = ["std"]

View File

@@ -3,9 +3,9 @@ use zeroize::Zeroize;
use sha2::{Digest, Sha512};
use group::Group;
use dalek_ff_group::Scalar;
use crate::Scalar;
use crate::Ciphersuite;
use ciphersuite::Ciphersuite;
macro_rules! dalek_curve {
(
@@ -15,7 +15,7 @@ macro_rules! dalek_curve {
$Point: ident,
$ID: literal
) => {
use dalek_ff_group::$Point;
use crate::$Point;
impl Ciphersuite for $Ciphersuite {
type F = Scalar;
@@ -40,12 +40,9 @@ macro_rules! dalek_curve {
/// hash_to_F is implemented with a naive concatenation of the dst and data, allowing transposition
/// between the two. This means `dst: b"abc", data: b"def"`, will produce the same scalar as
/// `dst: "abcdef", data: b""`. Please use carefully, not letting dsts be substrings of each other.
#[cfg(any(test, feature = "ristretto"))]
#[derive(Clone, Copy, PartialEq, Eq, Debug, Zeroize)]
pub struct Ristretto;
#[cfg(any(test, feature = "ristretto"))]
dalek_curve!("ristretto", Ristretto, RistrettoPoint, b"ristretto");
#[cfg(any(test, feature = "ristretto"))]
#[test]
fn test_ristretto() {
ff_group_tests::group::test_prime_group_bits::<_, RistrettoPoint>(&mut rand_core::OsRng);
@@ -71,12 +68,9 @@ fn test_ristretto() {
/// hash_to_F is implemented with a naive concatenation of the dst and data, allowing transposition
/// between the two. This means `dst: b"abc", data: b"def"`, will produce the same scalar as
/// `dst: "abcdef", data: b""`. Please use carefully, not letting dsts be substrings of each other.
#[cfg(feature = "ed25519")]
#[derive(Clone, Copy, PartialEq, Eq, Debug, Zeroize)]
pub struct Ed25519;
#[cfg(feature = "ed25519")]
dalek_curve!("ed25519", Ed25519, EdwardsPoint, b"edwards25519");
#[cfg(feature = "ed25519")]
#[test]
fn test_ed25519() {
ff_group_tests::group::test_prime_group_bits::<_, EdwardsPoint>(&mut rand_core::OsRng);

View File

@@ -17,7 +17,7 @@ use crypto_bigint::{
impl_modulus,
};
use group::ff::{Field, PrimeField, FieldBits, PrimeFieldBits};
use group::ff::{Field, PrimeField, FieldBits, PrimeFieldBits, FromUniformBytes};
use crate::{u8_from_bool, constant_time, math_op, math};
@@ -35,7 +35,8 @@ impl_modulus!(
type ResidueType = Residue<FieldModulus, { FieldModulus::LIMBS }>;
/// A constant-time implementation of the Ed25519 field.
#[derive(Clone, Copy, PartialEq, Eq, Default, Debug)]
#[derive(Clone, Copy, PartialEq, Eq, Default, Debug, Zeroize)]
#[repr(transparent)]
pub struct FieldElement(ResidueType);
// Square root of -1.
@@ -92,7 +93,7 @@ impl Neg for FieldElement {
}
}
impl<'a> Neg for &'a FieldElement {
impl Neg for &FieldElement {
type Output = FieldElement;
fn neg(self) -> Self::Output {
(*self).neg()
@@ -216,10 +217,18 @@ impl PrimeFieldBits for FieldElement {
}
impl FieldElement {
/// Interpret the value as a little-endian integer, square it, and reduce it into a FieldElement.
pub fn from_square(value: [u8; 32]) -> FieldElement {
let value = U256::from_le_bytes(value);
FieldElement(reduce(U512::from(value.mul_wide(&value))))
/// Create a FieldElement from a `crypto_bigint::U256`.
///
/// This will reduce the `U256` by the modulus, into a member of the field.
pub const fn from_u256(u256: &U256) -> Self {
FieldElement(Residue::new(u256))
}
/// Create a `FieldElement` from the reduction of a 512-bit number.
///
/// The bytes are interpreted in little-endian format.
pub fn wide_reduce(value: [u8; 64]) -> Self {
FieldElement(reduce(U512::from_le_bytes(value)))
}
/// Perform an exponentiation.
@@ -297,6 +306,12 @@ impl FieldElement {
}
}
impl FromUniformBytes<64> for FieldElement {
fn from_uniform_bytes(bytes: &[u8; 64]) -> Self {
Self::wide_reduce(*bytes)
}
}
impl Sum<FieldElement> for FieldElement {
fn sum<I: Iterator<Item = FieldElement>>(iter: I) -> FieldElement {
let mut res = FieldElement::ZERO;
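An illustrative sketch (not from this changeset) of the new wide-reduction constructors, assuming `dalek_ff_group::FieldElement` and the `FromUniformBytes<64>` impl added above:

use dalek_ff_group::FieldElement;
use group::ff::FromUniformBytes;

fn reduce_example(bytes: [u8; 64]) -> FieldElement {
  // Both paths perform a wide (512-bit, little-endian) reduction into the
  // Ed25519 field; the trait method forwards to `wide_reduce`.
  let a = FieldElement::wide_reduce(bytes);
  let b = FieldElement::from_uniform_bytes(&bytes);
  debug_assert_eq!(a, b);
  a
}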

View File

@@ -1,5 +1,5 @@
#![allow(deprecated)]
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
#![cfg_attr(docsrs, feature(doc_cfg))]
#![no_std] // Prevents writing new code, in what should be a simple wrapper, which requires std
#![doc = include_str!("../README.md")]
#![allow(clippy::redundant_closure_call)]
@@ -30,7 +30,7 @@ use dalek::{
pub use constants::{ED25519_BASEPOINT_TABLE, RISTRETTO_BASEPOINT_TABLE};
use group::{
ff::{Field, PrimeField, FieldBits, PrimeFieldBits},
ff::{Field, PrimeField, FieldBits, PrimeFieldBits, FromUniformBytes},
Group, GroupEncoding,
prime::PrimeGroup,
};
@@ -38,13 +38,24 @@ use group::{
mod field;
pub use field::FieldElement;
mod ciphersuite;
pub use crate::ciphersuite::{Ed25519, Ristretto};
// Use black_box when possible
#[rustversion::since(1.66)]
use core::hint::black_box;
#[rustversion::before(1.66)]
fn black_box<T>(val: T) -> T {
val
mod black_box {
pub(crate) fn black_box<T>(val: T) -> T {
#[allow(clippy::incompatible_msrv)]
core::hint::black_box(val)
}
}
#[rustversion::before(1.66)]
mod black_box {
pub(crate) fn black_box<T>(val: T) -> T {
val
}
}
use black_box::black_box;
fn u8_from_bool(bit_ref: &mut bool) -> u8 {
let bit_ref = black_box(bit_ref);
@@ -314,6 +325,12 @@ impl PrimeFieldBits for Scalar {
}
}
impl FromUniformBytes<64> for Scalar {
fn from_uniform_bytes(bytes: &[u8; 64]) -> Self {
Self::from_bytes_mod_order_wide(bytes)
}
}
impl Sum<Scalar> for Scalar {
fn sum<I: Iterator<Item = Scalar>>(iter: I) -> Scalar {
Self(DScalar::sum(iter))
@@ -351,7 +368,12 @@ macro_rules! dalek_group {
$BASEPOINT_POINT: ident,
$BASEPOINT_TABLE: ident
) => {
/// Wrapper around the dalek Point type. For Ed25519, this is restricted to the prime subgroup.
/// Wrapper around the dalek Point type.
///
/// All operations will be restricted to a prime-order subgroup (equivalent to the group itself
/// in the case of Ristretto). The exposure of the internal element does allow bypassing this
/// however, which may lead to undefined/computationally-unsafe behavior, and is entirely at
/// the user's risk.
#[derive(Clone, Copy, PartialEq, Eq, Debug, Zeroize)]
pub struct $Point(pub $DPoint);
deref_borrow!($Point, $DPoint);

View File

@@ -1,13 +1,13 @@
[package]
name = "dkg"
version = "0.5.1"
version = "0.6.1"
description = "Distributed key generation over ff/group"
license = "MIT"
repository = "https://github.com/serai-dex/serai/tree/develop/crypto/dkg"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = ["dkg", "multisig", "threshold", "ff", "group"]
edition = "2021"
rust-version = "1.79"
rust-version = "1.66"
[package.metadata.docs.rs]
all-features = true
@@ -17,50 +17,25 @@ rustdoc-args = ["--cfg", "docsrs"]
workspace = true
[dependencies]
thiserror = { version = "1", default-features = false, optional = true }
zeroize = { version = "^1.5", default-features = false, features = ["zeroize_derive", "alloc"] }
rand_core = { version = "0.6", default-features = false }
zeroize = { version = "^1.5", default-features = false, features = ["zeroize_derive"] }
thiserror = { version = "2", default-features = false }
std-shims = { version = "0.1", path = "../../common/std-shims", default-features = false }
borsh = { version = "1", default-features = false, features = ["derive", "de_strict_order"], optional = true }
transcript = { package = "flexible-transcript", path = "../transcript", version = "^0.3.2", default-features = false, features = ["recommended"] }
chacha20 = { version = "0.9", default-features = false, features = ["zeroize"] }
ciphersuite = { path = "../ciphersuite", version = "^0.4.1", default-features = false }
multiexp = { path = "../multiexp", version = "0.4", default-features = false }
schnorr = { package = "schnorr-signatures", path = "../schnorr", version = "^0.5.1", default-features = false }
dleq = { path = "../dleq", version = "^0.4.1", default-features = false }
[dev-dependencies]
rand_core = { version = "0.6", default-features = false, features = ["getrandom"] }
ciphersuite = { path = "../ciphersuite", default-features = false, features = ["ristretto"] }
ciphersuite = { path = "../ciphersuite", version = "^0.4.1", default-features = false, features = ["alloc"] }
[features]
std = [
"thiserror",
"rand_core/std",
"thiserror/std",
"std-shims/std",
"borsh?/std",
"transcript/std",
"chacha20/std",
"ciphersuite/std",
"multiexp/std",
"multiexp/batch",
"schnorr/std",
"dleq/std",
"dleq/serialize"
]
borsh = ["dep:borsh"]
tests = ["rand_core/getrandom"]
default = ["std"]

View File

@@ -1,6 +1,6 @@
MIT License
Copyright (c) 2021-2023 Luke Parker
Copyright (c) 2021-2025 Luke Parker
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal

View File

@@ -1,16 +1,15 @@
# Distributed Key Generation
A collection of implementations of various distributed key generation protocols.
A crate implementing a type for keys, presumably the result of a distributed
key generation protocol, and utilities from there.
All included protocols resolve into the provided `Threshold` types, intended to
enable their modularity. Additional utilities around these types, such as
promotion from one generator to another, are also provided.
This crate used to host implementations of distributed key generation protocols
as well (hence the name). Those have been smashed into their own crates, such
as [`dkg-musig`](https://docs.rs/dkg-musig) and
[`dkg-pedpop`](https://docs.rs/dkg-pedpop).
Currently, the only included protocol is the two-round protocol from the
[FROST paper](https://eprint.iacr.org/2020/852).
This library was
[audited by Cypher Stack in March 2023](https://github.com/serai-dex/serai/raw/e1bb2c191b7123fd260d008e31656d090d559d21/audits/Cypher%20Stack%20crypto%20March%202023/Audit.pdf),
culminating in commit
[669d2dbffc1dafb82a09d9419ea182667115df06](https://github.com/serai-dex/serai/tree/669d2dbffc1dafb82a09d9419ea182667115df06).
Any subsequent changes have not undergone auditing.
Before being smashed, this crate was [audited by Cypher Stack in March 2023](
https://github.com/serai-dex/serai/raw/e1bb2c191b7123fd260d008e31656d090d559d21/audits/Cypher%20Stack%20crypto%20March%202023/Audit.pdf
), culminating in commit [669d2dbffc1dafb82a09d9419ea182667115df06](
https://github.com/serai-dex/serai/tree/669d2dbffc1dafb82a09d9419ea182667115df06
). Any subsequent changes have not undergone auditing.

View File

@@ -0,0 +1,36 @@
[package]
name = "dkg-dealer"
version = "0.6.0"
description = "Produce dkg::ThresholdKeys with a dealer key generation"
license = "MIT"
repository = "https://github.com/serai-dex/serai/tree/develop/crypto/dkg/dealer"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = ["dkg", "multisig", "threshold", "ff", "group"]
edition = "2021"
rust-version = "1.66"
[package.metadata.docs.rs]
all-features = true
rustdoc-args = ["--cfg", "docsrs"]
[lints]
workspace = true
[dependencies]
zeroize = { version = "^1.5", default-features = false }
rand_core = { version = "0.6", default-features = false }
std-shims = { version = "0.1", path = "../../../common/std-shims", default-features = false }
ciphersuite = { path = "../../ciphersuite", version = "^0.4.1", default-features = false }
dkg = { path = "../", version = "0.6", default-features = false }
[features]
std = [
"zeroize/std",
"rand_core/std",
"std-shims/std",
"ciphersuite/std",
"dkg/std",
]
default = ["std"]

View File

@@ -1,6 +1,6 @@
MIT License
Copyright (c) 2022-2024 Luke Parker
Copyright (c) 2021-2025 Luke Parker
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal

View File

@@ -0,0 +1,13 @@
# Distributed Key Generation - Dealer
This crate implements a dealer key generation protocol for the
[`dkg`](https://docs.rs/dkg) crate's types. This provides a single point of
failure when the key is being generated and is NOT recommended for use outside
of tests.
This crate was originally part of (in some form) the `dkg` crate, which was
[audited by Cypher Stack in March 2023](
https://github.com/serai-dex/serai/raw/e1bb2c191b7123fd260d008e31656d090d559d21/audits/Cypher%20Stack%20crypto%20March%202023/Audit.pdf
), culminating in commit [669d2dbffc1dafb82a09d9419ea182667115df06](
https://github.com/serai-dex/serai/tree/669d2dbffc1dafb82a09d9419ea182667115df06
). Any subsequent changes have not undergone auditing.

View File

@@ -0,0 +1,68 @@
#![cfg_attr(docsrs, feature(doc_cfg))]
#![doc = include_str!("../README.md")]
#![no_std]
use core::ops::Deref;
use std_shims::{vec::Vec, collections::HashMap};
use zeroize::{Zeroize, Zeroizing};
use rand_core::{RngCore, CryptoRng};
use ciphersuite::{
group::ff::{Field, PrimeField},
Ciphersuite,
};
pub use dkg::*;
/// Create a key via a dealer key generation protocol.
pub fn key_gen<R: RngCore + CryptoRng, C: Ciphersuite>(
rng: &mut R,
threshold: u16,
participants: u16,
) -> Result<HashMap<Participant, ThresholdKeys<C>>, DkgError> {
let mut coefficients = Vec::with_capacity(usize::from(participants));
// `.max(1)` so we always generate the 0th coefficient which we'll share
for _ in 0 .. threshold.max(1) {
coefficients.push(Zeroizing::new(C::F::random(&mut *rng)));
}
fn polynomial<F: PrimeField + Zeroize>(
coefficients: &[Zeroizing<F>],
l: Participant,
) -> Zeroizing<F> {
let l = F::from(u64::from(u16::from(l)));
// This should never be reached since Participant is explicitly non-zero
assert!(l != F::ZERO, "zero participant passed to polynomial");
let mut share = Zeroizing::new(F::ZERO);
for (idx, coefficient) in coefficients.iter().rev().enumerate() {
*share += coefficient.deref();
if idx != (coefficients.len() - 1) {
*share *= l;
}
}
share
}
let group_key = C::generator() * coefficients[0].deref();
let mut secret_shares = HashMap::with_capacity(participants as usize);
let mut verification_shares = HashMap::with_capacity(participants as usize);
for i in 1 ..= participants {
let i = Participant::new(i).expect("non-zero u16 wasn't a valid Participant index");
let secret_share = polynomial(&coefficients, i);
secret_shares.insert(i, secret_share.clone());
verification_shares.insert(i, C::generator() * *secret_share);
}
let mut res = HashMap::with_capacity(participants as usize);
for (i, secret_share) in secret_shares {
let keys = ThresholdKeys::new(
ThresholdParams::new(threshold, participants, i)?,
Interpolation::Lagrange,
secret_share,
verification_shares.clone(),
)?;
debug_assert_eq!(keys.group_key(), group_key);
res.insert(i, keys);
}
Ok(res)
}
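An illustrative usage sketch (not from this changeset) for the dealer key generation above, assuming `rand_core`'s `OsRng` and `Ristretto` from `dalek-ff-group`:

use rand_core::OsRng;
use dalek_ff_group::Ristretto;

fn dealer_example() {
  // Generate 5 shares with a threshold of 3, all produced by a single dealer.
  // As the README notes, this is a single point of failure: tests only.
  let keys = dkg_dealer::key_gen::<_, Ristretto>(&mut OsRng, 3, 5).unwrap();
  assert_eq!(keys.len(), 5);
}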

View File

@@ -0,0 +1,49 @@
[package]
name = "dkg-musig"
version = "0.6.0"
description = "The MuSig key aggregation protocol"
license = "MIT"
repository = "https://github.com/serai-dex/serai/tree/develop/crypto/dkg/musig"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = ["dkg", "multisig", "threshold", "ff", "group"]
edition = "2021"
rust-version = "1.79"
[package.metadata.docs.rs]
all-features = true
rustdoc-args = ["--cfg", "docsrs"]
[lints]
workspace = true
[dependencies]
thiserror = { version = "2", default-features = false }
rand_core = { version = "0.6", default-features = false }
zeroize = { version = "^1.5", default-features = false, features = ["zeroize_derive"] }
std-shims = { version = "0.1", path = "../../../common/std-shims", default-features = false }
multiexp = { path = "../../multiexp", version = "0.4", default-features = false }
ciphersuite = { path = "../../ciphersuite", version = "^0.4.1", default-features = false }
dkg = { path = "../", version = "0.6", default-features = false }
[dev-dependencies]
rand_core = { version = "0.6", default-features = false, features = ["getrandom"] }
dalek-ff-group = { path = "../../dalek-ff-group" }
dkg-recovery = { path = "../recovery", default-features = false, features = ["std"] }
[features]
std = [
"thiserror/std",
"rand_core/std",
"std-shims/std",
"multiexp/std",
"ciphersuite/std",
"dkg/std",
]
default = ["std"]

View File

@@ -1,6 +1,6 @@
MIT License
Copyright (c) 2022-2024 Luke Parker
Copyright (c) 2021-2025 Luke Parker
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal

View File

@@ -0,0 +1,12 @@
# Distributed Key Generation - MuSig
This implements the MuSig key aggregation protocol for the
[`dkg`](https://docs.rs/dkg) crate's types.
This crate was originally part of (in some form) the `dkg` crate, which was
[audited by Cypher Stack in March 2023](
https://github.com/serai-dex/serai/raw/e1bb2c191b7123fd260d008e31656d090d559d21/audits/Cypher%20Stack%20crypto%20March%202023/Audit.pdf
), culminating in commit
[669d2dbffc1dafb82a09d9419ea182667115df06](
https://github.com/serai-dex/serai/tree/669d2dbffc1dafb82a09d9419ea182667115df06
). Any subsequent changes have not undergone auditing.
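An illustrative sketch (not from this changeset), assuming the `musig_key` export shown in the lib.rs below and `Ristretto` from `dalek-ff-group`. The aggregated key is the sum of each provided key weighted by a binding factor hashed from the context, the key count, the ordered key list, and that key's index.

use ciphersuite::Ciphersuite;
use dalek_ff_group::Ristretto;

fn aggregate(context: [u8; 32], keys: &[<Ristretto as Ciphersuite>::G]) {
  // A variable-time variant (`musig_key_vartime`) also exists for public data.
  let _group_key = dkg_musig::musig_key::<Ristretto>(context, keys).unwrap();
}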

crypto/dkg/musig/src/lib.rs (new file, 162 lines)
View File

@@ -0,0 +1,162 @@
#![cfg_attr(docsrs, feature(doc_cfg))]
#![doc = include_str!("../README.md")]
#![cfg_attr(not(feature = "std"), no_std)]
use core::ops::Deref;
use std_shims::{
vec,
vec::Vec,
collections::{HashSet, HashMap},
};
use zeroize::Zeroizing;
use ciphersuite::{group::GroupEncoding, Ciphersuite};
pub use dkg::*;
#[cfg(test)]
mod tests;
/// Errors encountered when working with threshold keys.
#[derive(Clone, PartialEq, Eq, Debug, thiserror::Error)]
pub enum MusigError<C: Ciphersuite> {
/// No keys were provided.
#[error("no keys provided")]
NoKeysProvided,
/// Too many keys were provided.
#[error("too many keys (allowed {max}, provided {provided})")]
TooManyKeysProvided {
/// The maximum amount of keys allowed.
max: u16,
/// The amount of keys provided.
provided: usize,
},
/// A participant was duplicated.
#[error("a participant was duplicated")]
DuplicatedParticipant(C::G),
/// Participating, yet our public key wasn't found in the list of keys.
#[error("private key's public key wasn't present in the list of public keys")]
NotPresent,
/// An error propagated from the underlying `dkg` crate.
#[error("error from dkg ({0})")]
DkgError(DkgError),
}
fn check_keys<C: Ciphersuite>(keys: &[C::G]) -> Result<u16, MusigError<C>> {
if keys.is_empty() {
Err(MusigError::NoKeysProvided)?;
}
let keys_len = u16::try_from(keys.len())
.map_err(|_| MusigError::TooManyKeysProvided { max: u16::MAX, provided: keys.len() })?;
let mut set = HashSet::with_capacity(keys.len());
for key in keys {
let bytes = key.to_bytes().as_ref().to_vec();
if !set.insert(bytes) {
Err(MusigError::DuplicatedParticipant(*key))?;
}
}
Ok(keys_len)
}
fn binding_factor_transcript<C: Ciphersuite>(
context: [u8; 32],
keys_len: u16,
keys: &[C::G],
) -> Vec<u8> {
debug_assert_eq!(usize::from(keys_len), keys.len());
let mut transcript = vec![];
transcript.extend(&context);
transcript.extend(keys_len.to_le_bytes());
for key in keys {
transcript.extend(key.to_bytes().as_ref());
}
transcript
}
fn binding_factor<C: Ciphersuite>(mut transcript: Vec<u8>, i: u16) -> C::F {
transcript.extend(i.to_le_bytes());
C::hash_to_F(b"dkg-musig", &transcript)
}
#[allow(clippy::type_complexity)]
fn musig_key_multiexp<C: Ciphersuite>(
context: [u8; 32],
keys: &[C::G],
) -> Result<Vec<(C::F, C::G)>, MusigError<C>> {
let keys_len = check_keys::<C>(keys)?;
let transcript = binding_factor_transcript::<C>(context, keys_len, keys);
let mut multiexp = Vec::with_capacity(keys.len());
for i in 1 ..= keys_len {
multiexp.push((binding_factor::<C>(transcript.clone(), i), keys[usize::from(i - 1)]));
}
Ok(multiexp)
}
/// The group key resulting from using this library's MuSig key aggregation.
///
/// This function executes in variable time and MUST NOT be used with secret data.
pub fn musig_key_vartime<C: Ciphersuite>(
context: [u8; 32],
keys: &[C::G],
) -> Result<C::G, MusigError<C>> {
Ok(multiexp::multiexp_vartime(&musig_key_multiexp(context, keys)?))
}
/// The group key resulting from using this library's MuSig key aggregation.
pub fn musig_key<C: Ciphersuite>(context: [u8; 32], keys: &[C::G]) -> Result<C::G, MusigError<C>> {
Ok(multiexp::multiexp(&musig_key_multiexp(context, keys)?))
}
/// An n-of-n non-interactive DKG which does not guarantee the usability of the resulting key.
pub fn musig<C: Ciphersuite>(
context: [u8; 32],
private_key: Zeroizing<C::F>,
keys: &[C::G],
) -> Result<ThresholdKeys<C>, MusigError<C>> {
let our_pub_key = C::generator() * private_key.deref();
let Some(our_i) = keys.iter().position(|key| *key == our_pub_key) else {
Err(MusigError::DkgError(DkgError::NotParticipating))?
};
let keys_len: u16 = check_keys::<C>(keys)?;
let params = ThresholdParams::new(
keys_len,
keys_len,
// The `+ 1` won't fail as `keys.len() <= u16::MAX`, so any index is `< u16::MAX`
Participant::new(
u16::try_from(our_i).expect("keys.len() <= u16::MAX yet index of keys > u16::MAX?") + 1,
)
.expect("i + 1 != 0"),
)
.map_err(MusigError::DkgError)?;
let transcript = binding_factor_transcript::<C>(context, keys_len, keys);
let mut binding_factors = Vec::with_capacity(keys.len());
let mut multiexp = Vec::with_capacity(keys.len());
let mut verification_shares = HashMap::with_capacity(keys.len());
for (i, key) in (1 ..= keys_len).zip(keys.iter().copied()) {
let binding_factor = binding_factor::<C>(transcript.clone(), i);
binding_factors.push(binding_factor);
multiexp.push((binding_factor, key));
let i = Participant::new(i).expect("non-zero u16 wasn't a valid Participant index?");
verification_shares.insert(i, key);
}
let group_key = multiexp::multiexp(&multiexp);
debug_assert_eq!(our_pub_key, verification_shares[&params.i()]);
debug_assert_eq!(musig_key_vartime::<C>(context, keys), Ok(group_key));
ThresholdKeys::new(
params,
Interpolation::Constant(binding_factors),
private_key,
verification_shares,
)
.map_err(MusigError::DkgError)
}

View File

@@ -0,0 +1,71 @@
use std::collections::HashMap;
use zeroize::Zeroizing;
use rand_core::OsRng;
use dalek_ff_group::Ristretto;
use ciphersuite::{group::ff::Field, Ciphersuite};
use dkg_recovery::recover_key;
use crate::*;
/// Tests MuSig key generation.
#[test]
pub fn test_musig() {
const PARTICIPANTS: u16 = 5;
let mut keys = vec![];
let mut pub_keys = vec![];
for _ in 0 .. PARTICIPANTS {
let key = Zeroizing::new(<Ristretto as Ciphersuite>::F::random(&mut OsRng));
pub_keys.push(<Ristretto as Ciphersuite>::generator() * *key);
keys.push(key);
}
const CONTEXT: [u8; 32] = *b"MuSig Test ";
// Empty signing set
musig::<Ristretto>(CONTEXT, Zeroizing::new(<Ristretto as Ciphersuite>::F::ZERO), &[])
.unwrap_err();
// Signing set we're not part of
musig::<Ristretto>(
CONTEXT,
Zeroizing::new(<Ristretto as Ciphersuite>::F::ZERO),
&[<Ristretto as Ciphersuite>::generator()],
)
.unwrap_err();
// Test with n keys
{
let mut created_keys = HashMap::new();
let mut verification_shares = HashMap::new();
let group_key = musig_key::<Ristretto>(CONTEXT, &pub_keys).unwrap();
for (i, key) in keys.iter().enumerate() {
let these_keys = musig::<Ristretto>(CONTEXT, key.clone(), &pub_keys).unwrap();
assert_eq!(these_keys.params().t(), PARTICIPANTS);
assert_eq!(these_keys.params().n(), PARTICIPANTS);
assert_eq!(usize::from(u16::from(these_keys.params().i())), i + 1);
verification_shares.insert(
these_keys.params().i(),
<Ristretto as Ciphersuite>::generator() * **these_keys.original_secret_share(),
);
assert_eq!(these_keys.group_key(), group_key);
created_keys.insert(these_keys.params().i(), these_keys);
}
for keys in created_keys.values() {
for (l, verification_share) in &verification_shares {
assert_eq!(keys.original_verification_share(*l), *verification_share);
}
}
assert_eq!(
<Ristretto as Ciphersuite>::generator() *
*recover_key(&created_keys.values().cloned().collect::<Vec<_>>()).unwrap(),
group_key
);
}
}

View File

@@ -0,0 +1,37 @@
[package]
name = "dkg-pedpop"
version = "0.6.0"
description = "The PedPoP distributed key generation protocol"
license = "MIT"
repository = "https://github.com/serai-dex/serai/tree/develop/crypto/dkg/pedpop"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = ["dkg", "multisig", "threshold", "ff", "group"]
edition = "2021"
rust-version = "1.80"
[package.metadata.docs.rs]
all-features = true
rustdoc-args = ["--cfg", "docsrs"]
[lints]
workspace = true
[dependencies]
thiserror = { version = "2", default-features = false, features = ["std"] }
zeroize = { version = "^1.5", default-features = false, features = ["std", "zeroize_derive"] }
rand_core = { version = "0.6", default-features = false, features = ["std"] }
transcript = { package = "flexible-transcript", path = "../../transcript", version = "^0.3.3", default-features = false, features = ["std", "recommended"] }
chacha20 = { version = "0.9", default-features = false, features = ["std", "zeroize"] }
multiexp = { path = "../../multiexp", version = "0.4", default-features = false, features = ["std"] }
ciphersuite = { path = "../../ciphersuite", version = "^0.4.1", default-features = false, features = ["std"] }
schnorr = { package = "schnorr-signatures", path = "../../schnorr", version = "^0.5.1", default-features = false, features = ["std"] }
dleq = { path = "../../dleq", version = "^0.4.1", default-features = false, features = ["std", "serialize"] }
dkg = { path = "../", version = "0.6", default-features = false, features = ["std"] }
[dev-dependencies]
rand_core = { version = "0.6", default-features = false, features = ["getrandom"] }
dalek-ff-group = { path = "../../dalek-ff-group", default-features = false }

View File

@@ -1,6 +1,6 @@
MIT License
Copyright (c) 2022-2024 Luke Parker
Copyright (c) 2021-2025 Luke Parker
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal

View File

@@ -0,0 +1,12 @@
# Distributed Key Generation - PedPoP
This implements the PedPoP distributed key generation protocol for the
[`dkg`](https://docs.rs/dkg) crate's types.
This crate was originally part of the `dkg` crate, which was
[audited by Cypher Stack in March 2023](
https://github.com/serai-dex/serai/raw/e1bb2c191b7123fd260d008e31656d090d559d21/audits/Cypher%20Stack%20crypto%20March%202023/Audit.pdf
), culminating in commit
[669d2dbffc1dafb82a09d9419ea182667115df06](
https://github.com/serai-dex/serai/tree/669d2dbffc1dafb82a09d9419ea182667115df06
). Any subsequent changes have not undergone auditing.

View File

@@ -21,7 +21,7 @@ use multiexp::BatchVerifier;
use schnorr::SchnorrSignature;
use dleq::DLEqProof;
use crate::{Participant, ThresholdParams};
use dkg::{Participant, ThresholdParams};
mod sealed {
use super::*;
@@ -69,7 +69,7 @@ impl<C: Ciphersuite, M: Message> EncryptionKeyMessage<C, M> {
buf
}
#[cfg(any(test, feature = "tests"))]
#[cfg(test)]
pub(crate) fn enc_key(&self) -> C::G {
self.enc_key
}
@@ -98,11 +98,11 @@ fn ecdh<C: Ciphersuite>(private: &Zeroizing<C::F>, public: C::G) -> Zeroizing<C:
// Each ecdh must be distinct. Reuse of an ecdh for multiple ciphers will cause the messages to be
// leaked.
fn cipher<C: Ciphersuite>(context: &str, ecdh: &Zeroizing<C::G>) -> ChaCha20 {
fn cipher<C: Ciphersuite>(context: [u8; 32], ecdh: &Zeroizing<C::G>) -> ChaCha20 {
// Ideally, we'd box this transcript with ZAlloc, yet that's only possible on nightly
// TODO: https://github.com/serai-dex/serai/issues/151
let mut transcript = RecommendedTranscript::new(b"DKG Encryption v0.2");
transcript.append_message(b"context", context.as_bytes());
transcript.append_message(b"context", context);
transcript.domain_separate(b"encryption_key");
@@ -134,7 +134,7 @@ fn cipher<C: Ciphersuite>(context: &str, ecdh: &Zeroizing<C::G>) -> ChaCha20 {
fn encrypt<R: RngCore + CryptoRng, C: Ciphersuite, E: Encryptable>(
rng: &mut R,
context: &str,
context: [u8; 32],
from: Participant,
to: C::G,
mut msg: Zeroizing<E>,
@@ -197,7 +197,7 @@ impl<C: Ciphersuite, E: Encryptable> EncryptedMessage<C, E> {
pub(crate) fn invalidate_msg<R: RngCore + CryptoRng>(
&mut self,
rng: &mut R,
context: &str,
context: [u8; 32],
from: Participant,
) {
// Invalidate the message by specifying a new key/Schnorr PoP
@@ -219,7 +219,7 @@ impl<C: Ciphersuite, E: Encryptable> EncryptedMessage<C, E> {
pub(crate) fn invalidate_share_serialization<R: RngCore + CryptoRng>(
&mut self,
rng: &mut R,
context: &str,
context: [u8; 32],
from: Participant,
to: C::G,
) {
@@ -243,7 +243,7 @@ impl<C: Ciphersuite, E: Encryptable> EncryptedMessage<C, E> {
pub(crate) fn invalidate_share_value<R: RngCore + CryptoRng>(
&mut self,
rng: &mut R,
context: &str,
context: [u8; 32],
from: Participant,
to: C::G,
) {
@@ -300,14 +300,14 @@ impl<C: Ciphersuite> EncryptionKeyProof<C> {
// This still doesn't mean the DKG offers an authenticated channel. The per-message keys have no
// root of trust other than their existence in the assumed-to-exist external authenticated channel.
fn pop_challenge<C: Ciphersuite>(
context: &str,
context: [u8; 32],
nonce: C::G,
key: C::G,
sender: Participant,
msg: &[u8],
) -> C::F {
let mut transcript = RecommendedTranscript::new(b"DKG Encryption Key Proof of Possession v0.2");
transcript.append_message(b"context", context.as_bytes());
transcript.append_message(b"context", context);
transcript.domain_separate(b"proof_of_possession");
@@ -323,9 +323,9 @@ fn pop_challenge<C: Ciphersuite>(
C::hash_to_F(b"DKG-encryption-proof_of_possession", &transcript.challenge(b"schnorr"))
}
fn encryption_key_transcript(context: &str) -> RecommendedTranscript {
fn encryption_key_transcript(context: [u8; 32]) -> RecommendedTranscript {
let mut transcript = RecommendedTranscript::new(b"DKG Encryption Key Correctness Proof v0.2");
transcript.append_message(b"context", context.as_bytes());
transcript.append_message(b"context", context);
transcript
}
@@ -337,58 +337,17 @@ pub(crate) enum DecryptionError {
InvalidProof,
}
// A simple box for managing encryption.
#[derive(Clone)]
pub(crate) struct Encryption<C: Ciphersuite> {
context: String,
i: Option<Participant>,
enc_key: Zeroizing<C::F>,
enc_pub_key: C::G,
// A simple box for managing decryption.
#[derive(Clone, Debug)]
pub(crate) struct Decryption<C: Ciphersuite> {
context: [u8; 32],
enc_keys: HashMap<Participant, C::G>,
}
impl<C: Ciphersuite> fmt::Debug for Encryption<C> {
fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result {
fmt
.debug_struct("Encryption")
.field("context", &self.context)
.field("i", &self.i)
.field("enc_pub_key", &self.enc_pub_key)
.field("enc_keys", &self.enc_keys)
.finish_non_exhaustive()
impl<C: Ciphersuite> Decryption<C> {
pub(crate) fn new(context: [u8; 32]) -> Self {
Self { context, enc_keys: HashMap::new() }
}
}
impl<C: Ciphersuite> Zeroize for Encryption<C> {
fn zeroize(&mut self) {
self.enc_key.zeroize();
self.enc_pub_key.zeroize();
for (_, mut value) in self.enc_keys.drain() {
value.zeroize();
}
}
}
impl<C: Ciphersuite> Encryption<C> {
pub(crate) fn new<R: RngCore + CryptoRng>(
context: String,
i: Option<Participant>,
rng: &mut R,
) -> Self {
let enc_key = Zeroizing::new(C::random_nonzero_F(rng));
Self {
context,
i,
enc_pub_key: C::generator() * enc_key.deref(),
enc_key,
enc_keys: HashMap::new(),
}
}
pub(crate) fn registration<M: Message>(&self, msg: M) -> EncryptionKeyMessage<C, M> {
EncryptionKeyMessage { msg, enc_key: self.enc_pub_key }
}
pub(crate) fn register<M: Message>(
&mut self,
participant: Participant,
@@ -402,13 +361,109 @@ impl<C: Ciphersuite> Encryption<C> {
msg.msg
}
// Given a message, and the intended decryptor, and a proof for its key, decrypt the message.
// Returns None if the key was wrong.
pub(crate) fn decrypt_with_proof<E: Encryptable>(
&self,
from: Participant,
decryptor: Participant,
mut msg: EncryptedMessage<C, E>,
// There's no encryption key proof if the accusation is of an invalid signature
proof: Option<EncryptionKeyProof<C>>,
) -> Result<Zeroizing<E>, DecryptionError> {
if !msg.pop.verify(
msg.key,
pop_challenge::<C>(self.context, msg.pop.R, msg.key, from, msg.msg.deref().as_ref()),
) {
Err(DecryptionError::InvalidSignature)?;
}
if let Some(proof) = proof {
// Verify this is the decryption key for this message
proof
.dleq
.verify(
&mut encryption_key_transcript(self.context),
&[C::generator(), msg.key],
&[self.enc_keys[&decryptor], *proof.key],
)
.map_err(|_| DecryptionError::InvalidProof)?;
cipher::<C>(self.context, &proof.key).apply_keystream(msg.msg.as_mut().as_mut());
Ok(msg.msg)
} else {
Err(DecryptionError::InvalidProof)
}
}
}
// A simple box for managing encryption.
#[derive(Clone)]
pub(crate) struct Encryption<C: Ciphersuite> {
context: [u8; 32],
i: Participant,
enc_key: Zeroizing<C::F>,
enc_pub_key: C::G,
decryption: Decryption<C>,
}
impl<C: Ciphersuite> fmt::Debug for Encryption<C> {
fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result {
fmt
.debug_struct("Encryption")
.field("context", &self.context)
.field("i", &self.i)
.field("enc_pub_key", &self.enc_pub_key)
.field("decryption", &self.decryption)
.finish_non_exhaustive()
}
}
impl<C: Ciphersuite> Zeroize for Encryption<C> {
fn zeroize(&mut self) {
self.enc_key.zeroize();
self.enc_pub_key.zeroize();
for (_, mut value) in self.decryption.enc_keys.drain() {
value.zeroize();
}
}
}
impl<C: Ciphersuite> Encryption<C> {
pub(crate) fn new<R: RngCore + CryptoRng>(
context: [u8; 32],
i: Participant,
rng: &mut R,
) -> Self {
let enc_key = Zeroizing::new(C::random_nonzero_F(rng));
Self {
context,
i,
enc_pub_key: C::generator() * enc_key.deref(),
enc_key,
decryption: Decryption::new(context),
}
}
pub(crate) fn registration<M: Message>(&self, msg: M) -> EncryptionKeyMessage<C, M> {
EncryptionKeyMessage { msg, enc_key: self.enc_pub_key }
}
pub(crate) fn register<M: Message>(
&mut self,
participant: Participant,
msg: EncryptionKeyMessage<C, M>,
) -> M {
self.decryption.register(participant, msg)
}
pub(crate) fn encrypt<R: RngCore + CryptoRng, E: Encryptable>(
&self,
rng: &mut R,
participant: Participant,
msg: Zeroizing<E>,
) -> EncryptedMessage<C, E> {
encrypt(rng, &self.context, self.i.unwrap(), self.enc_keys[&participant], msg)
encrypt(rng, self.context, self.i, self.decryption.enc_keys[&participant], msg)
}
pub(crate) fn decrypt<R: RngCore + CryptoRng, I: Copy + Zeroize, E: Encryptable>(
@@ -426,18 +481,18 @@ impl<C: Ciphersuite> Encryption<C> {
batch,
batch_id,
msg.key,
pop_challenge::<C>(&self.context, msg.pop.R, msg.key, from, msg.msg.deref().as_ref()),
pop_challenge::<C>(self.context, msg.pop.R, msg.key, from, msg.msg.deref().as_ref()),
);
let key = ecdh::<C>(&self.enc_key, msg.key);
cipher::<C>(&self.context, &key).apply_keystream(msg.msg.as_mut().as_mut());
cipher::<C>(self.context, &key).apply_keystream(msg.msg.as_mut().as_mut());
(
msg.msg,
EncryptionKeyProof {
key,
dleq: DLEqProof::prove(
rng,
&mut encryption_key_transcript(&self.context),
&mut encryption_key_transcript(self.context),
&[C::generator(), msg.key],
&self.enc_key,
),
@@ -445,38 +500,7 @@ impl<C: Ciphersuite> Encryption<C> {
)
}
// Given a message, and the intended decryptor, and a proof for its key, decrypt the message.
// Returns None if the key was wrong.
pub(crate) fn decrypt_with_proof<E: Encryptable>(
&self,
from: Participant,
decryptor: Participant,
mut msg: EncryptedMessage<C, E>,
// There's no encryption key proof if the accusation is of an invalid signature
proof: Option<EncryptionKeyProof<C>>,
) -> Result<Zeroizing<E>, DecryptionError> {
if !msg.pop.verify(
msg.key,
pop_challenge::<C>(&self.context, msg.pop.R, msg.key, from, msg.msg.deref().as_ref()),
) {
Err(DecryptionError::InvalidSignature)?;
}
if let Some(proof) = proof {
// Verify this is the decryption key for this message
proof
.dleq
.verify(
&mut encryption_key_transcript(&self.context),
&[C::generator(), msg.key],
&[self.enc_keys[&decryptor], *proof.key],
)
.map_err(|_| DecryptionError::InvalidProof)?;
cipher::<C>(&self.context, &proof.key).apply_keystream(msg.msg.as_mut().as_mut());
Ok(msg.msg)
} else {
Err(DecryptionError::InvalidProof)
}
pub(crate) fn into_decryption(self) -> Decryption<C> {
self.decryption
}
}

View File

@@ -1,15 +1,20 @@
#![cfg_attr(docsrs, feature(doc_cfg))]
#![doc = include_str!("../README.md")]
// This crate requires `dleq` which doesn't support no-std via std-shims
// #![cfg_attr(not(feature = "std"), no_std)]
use core::{marker::PhantomData, ops::Deref, fmt};
use std::{
io::{self, Read, Write},
collections::HashMap,
};
use rand_core::{RngCore, CryptoRng};
use zeroize::{Zeroize, ZeroizeOnDrop, Zeroizing};
use rand_core::{RngCore, CryptoRng};
use transcript::{Transcript, RecommendedTranscript};
use multiexp::{multiexp_vartime, BatchVerifier};
use ciphersuite::{
group::{
ff::{Field, PrimeField},
@@ -17,29 +22,75 @@ use ciphersuite::{
},
Ciphersuite,
};
use multiexp::{multiexp_vartime, BatchVerifier};
use schnorr::SchnorrSignature;
use crate::{
Participant, DkgError, ThresholdParams, ThresholdCore, validate_map,
encryption::{
ReadWrite, EncryptionKeyMessage, EncryptedMessage, Encryption, EncryptionKeyProof,
DecryptionError,
},
};
pub use dkg::*;
type FrostError<C> = DkgError<EncryptionKeyProof<C>>;
mod encryption;
pub use encryption::*;
#[cfg(test)]
mod tests;
/// Errors possible during key generation.
#[derive(Clone, PartialEq, Eq, Debug, thiserror::Error)]
pub enum PedPoPError<C: Ciphersuite> {
/// An incorrect amount of participants was provided.
#[error("incorrect amount of participants (expected {expected}, found {found})")]
IncorrectAmountOfParticipants { expected: usize, found: usize },
/// An invalid proof of knowledge was provided.
#[error("invalid proof of knowledge (participant {0})")]
InvalidCommitments(Participant),
/// An invalid DKG share was provided.
#[error("invalid share (participant {participant}, blame {blame})")]
InvalidShare { participant: Participant, blame: Option<EncryptionKeyProof<C>> },
/// A participant was missing.
#[error("missing participant {0}")]
MissingParticipant(Participant),
/// An error propagated from the underlying `dkg` crate.
#[error("error from dkg ({0})")]
DkgError(DkgError),
}
// Validate a map of values to have the expected included participants
fn validate_map<T, C: Ciphersuite>(
map: &HashMap<Participant, T>,
included: &[Participant],
ours: Participant,
) -> Result<(), PedPoPError<C>> {
if (map.len() + 1) != included.len() {
Err(PedPoPError::IncorrectAmountOfParticipants {
expected: included.len(),
found: map.len() + 1,
})?;
}
for included in included {
if *included == ours {
if map.contains_key(included) {
Err(PedPoPError::DkgError(DkgError::DuplicatedParticipant(*included)))?;
}
continue;
}
if !map.contains_key(included) {
Err(PedPoPError::MissingParticipant(*included))?;
}
}
Ok(())
}
#[allow(non_snake_case)]
fn challenge<C: Ciphersuite>(context: &str, l: Participant, R: &[u8], Am: &[u8]) -> C::F {
let mut transcript = RecommendedTranscript::new(b"DKG FROST v0.2");
fn challenge<C: Ciphersuite>(context: [u8; 32], l: Participant, R: &[u8], Am: &[u8]) -> C::F {
let mut transcript = RecommendedTranscript::new(b"DKG PedPoP v0.2");
transcript.domain_separate(b"schnorr_proof_of_knowledge");
transcript.append_message(b"context", context.as_bytes());
transcript.append_message(b"context", context);
transcript.append_message(b"participant", l.to_bytes());
transcript.append_message(b"nonce", R);
transcript.append_message(b"commitments", Am);
C::hash_to_F(b"DKG-FROST-proof_of_knowledge-0", &transcript.challenge(b"schnorr"))
C::hash_to_F(b"DKG-PedPoP-proof_of_knowledge-0", &transcript.challenge(b"schnorr"))
}
/// The commitments message, intended to be broadcast to all other parties.
@@ -86,19 +137,19 @@ impl<C: Ciphersuite> ReadWrite for Commitments<C> {
#[derive(Debug, Zeroize)]
pub struct KeyGenMachine<C: Ciphersuite> {
params: ThresholdParams,
context: String,
context: [u8; 32],
_curve: PhantomData<C>,
}
impl<C: Ciphersuite> KeyGenMachine<C> {
/// Create a new machine to generate a key.
///
/// The context string should be unique among multisigs.
pub fn new(params: ThresholdParams, context: String) -> KeyGenMachine<C> {
/// The context should be unique among multisigs.
pub fn new(params: ThresholdParams, context: [u8; 32]) -> KeyGenMachine<C> {
KeyGenMachine { params, context, _curve: PhantomData }
}
/// Start generating a key according to the FROST DKG spec.
/// Start generating a key according to the PedPoP DKG specification present in the FROST paper.
///
/// Returns a commitments message to be sent to all parties over an authenticated channel. If any
/// party submits multiple sets of commitments, they MUST be treated as malicious.
@@ -106,7 +157,7 @@ impl<C: Ciphersuite> KeyGenMachine<C> {
self,
rng: &mut R,
) -> (SecretShareMachine<C>, EncryptionKeyMessage<C, Commitments<C>>) {
let t = usize::from(self.params.t);
let t = usize::from(self.params.t());
let mut coefficients = Vec::with_capacity(t);
let mut commitments = Vec::with_capacity(t);
let mut cached_msg = vec![];
@@ -129,11 +180,11 @@ impl<C: Ciphersuite> KeyGenMachine<C> {
// There's no reason to spend the time and effort to make this deterministic besides a
// general obsession with canonicity and determinism though
r,
challenge::<C>(&self.context, self.params.i(), nonce.to_bytes().as_ref(), &cached_msg),
challenge::<C>(self.context, self.params.i(), nonce.to_bytes().as_ref(), &cached_msg),
);
// Additionally create an encryption mechanism to protect the secret shares
let encryption = Encryption::new(self.context.clone(), Some(self.params.i), rng);
let encryption = Encryption::new(self.context, self.params.i(), rng);
// Step 4: Broadcast
let msg =
@@ -225,7 +276,7 @@ impl<F: PrimeField> ReadWrite for SecretShare<F> {
#[derive(Zeroize)]
pub struct SecretShareMachine<C: Ciphersuite> {
params: ThresholdParams,
context: String,
context: [u8; 32],
coefficients: Vec<Zeroizing<C::F>>,
our_commitments: Vec<C::G>,
encryption: Encryption<C>,
@@ -250,21 +301,21 @@ impl<C: Ciphersuite> SecretShareMachine<C> {
&mut self,
rng: &mut R,
mut commitment_msgs: HashMap<Participant, EncryptionKeyMessage<C, Commitments<C>>>,
) -> Result<HashMap<Participant, Vec<C::G>>, FrostError<C>> {
) -> Result<HashMap<Participant, Vec<C::G>>, PedPoPError<C>> {
validate_map(
&commitment_msgs,
&(1 ..= self.params.n()).map(Participant).collect::<Vec<_>>(),
&self.params.all_participant_indexes().collect::<Vec<_>>(),
self.params.i(),
)?;
let mut batch = BatchVerifier::<Participant, C::G>::new(commitment_msgs.len());
let mut commitments = HashMap::new();
for l in (1 ..= self.params.n()).map(Participant) {
for l in self.params.all_participant_indexes() {
let Some(msg) = commitment_msgs.remove(&l) else { continue };
let mut msg = self.encryption.register(l, msg);
if msg.commitments.len() != self.params.t().into() {
Err(FrostError::InvalidCommitments(l))?;
Err(PedPoPError::InvalidCommitments(l))?;
}
// Step 5: Validate each proof of knowledge
@@ -274,15 +325,15 @@ impl<C: Ciphersuite> SecretShareMachine<C> {
&mut batch,
l,
msg.commitments[0],
challenge::<C>(&self.context, l, msg.sig.R.to_bytes().as_ref(), &msg.cached_msg),
challenge::<C>(self.context, l, msg.sig.R.to_bytes().as_ref(), &msg.cached_msg),
);
commitments.insert(l, msg.commitments.drain(..).collect::<Vec<_>>());
}
batch.verify_vartime_with_vartime_blame().map_err(FrostError::InvalidCommitments)?;
batch.verify_vartime_with_vartime_blame().map_err(PedPoPError::InvalidCommitments)?;
commitments.insert(self.params.i, self.our_commitments.drain(..).collect());
commitments.insert(self.params.i(), self.our_commitments.drain(..).collect());
Ok(commitments)
}
@@ -299,13 +350,13 @@ impl<C: Ciphersuite> SecretShareMachine<C> {
commitments: HashMap<Participant, EncryptionKeyMessage<C, Commitments<C>>>,
) -> Result<
(KeyMachine<C>, HashMap<Participant, EncryptedMessage<C, SecretShare<C::F>>>),
FrostError<C>,
PedPoPError<C>,
> {
let commitments = self.verify_r1(&mut *rng, commitments)?;
// Step 1: Generate secret shares for all other parties
let mut res = HashMap::new();
for l in (1 ..= self.params.n()).map(Participant) {
for l in self.params.all_participant_indexes() {
// Don't insert our own shares to the byte buffer which is meant to be sent around
// An app developer could accidentally send it. Best to keep this black boxed
if l == self.params.i() {
@@ -413,10 +464,10 @@ impl<C: Ciphersuite> KeyMachine<C> {
mut self,
rng: &mut R,
mut shares: HashMap<Participant, EncryptedMessage<C, SecretShare<C::F>>>,
) -> Result<BlameMachine<C>, FrostError<C>> {
) -> Result<BlameMachine<C>, PedPoPError<C>> {
validate_map(
&shares,
&(1 ..= self.params.n()).map(Participant).collect::<Vec<_>>(),
&self.params.all_participant_indexes().collect::<Vec<_>>(),
self.params.i(),
)?;
@@ -427,7 +478,7 @@ impl<C: Ciphersuite> KeyMachine<C> {
self.encryption.decrypt(rng, &mut batch, BatchId::Decryption(l), l, share_bytes);
let share =
Zeroizing::new(Option::<C::F>::from(C::F::from_repr(share_bytes.0)).ok_or_else(|| {
FrostError::InvalidShare { participant: l, blame: Some(blame.clone()) }
PedPoPError::InvalidShare { participant: l, blame: Some(blame.clone()) }
})?);
share_bytes.zeroize();
*self.secret += share.deref();
@@ -444,7 +495,7 @@ impl<C: Ciphersuite> KeyMachine<C> {
BatchId::Decryption(l) => (l, None),
BatchId::Share(l) => (l, Some(blames.remove(&l).unwrap())),
};
FrostError::InvalidShare { participant: l, blame }
PedPoPError::InvalidShare { participant: l, blame }
})?;
// Stripe commitments per t and sum them in advance. Calculating verification shares relies on
@@ -458,7 +509,7 @@ impl<C: Ciphersuite> KeyMachine<C> {
// Calculate each user's verification share
let mut verification_shares = HashMap::new();
for i in (1 ..= self.params.n()).map(Participant) {
for i in self.params.all_participant_indexes() {
verification_shares.insert(
i,
if i == self.params.i() {
@@ -472,13 +523,11 @@ impl<C: Ciphersuite> KeyMachine<C> {
let KeyMachine { commitments, encryption, params, secret } = self;
Ok(BlameMachine {
commitments,
encryption,
result: Some(ThresholdCore {
params,
secret_share: secret,
group_key: stripes[0],
verification_shares,
}),
encryption: encryption.into_decryption(),
result: Some(
ThresholdKeys::new(params, Interpolation::Lagrange, secret, verification_shares)
.map_err(PedPoPError::DkgError)?,
),
})
}
}
@@ -486,8 +535,8 @@ impl<C: Ciphersuite> KeyMachine<C> {
/// A machine capable of handling blame proofs.
pub struct BlameMachine<C: Ciphersuite> {
commitments: HashMap<Participant, Vec<C::G>>,
encryption: Encryption<C>,
result: Option<ThresholdCore<C>>,
encryption: Decryption<C>,
result: Option<ThresholdKeys<C>>,
}
impl<C: Ciphersuite> fmt::Debug for BlameMachine<C> {
@@ -505,7 +554,6 @@ impl<C: Ciphersuite> Zeroize for BlameMachine<C> {
for commitments in self.commitments.values_mut() {
commitments.zeroize();
}
self.encryption.zeroize();
self.result.zeroize();
}
}
@@ -520,7 +568,7 @@ impl<C: Ciphersuite> BlameMachine<C> {
/// territory of consensus protocols. This library does not handle that nor does it provide any
/// tooling to do so. This function is solely intended to force users to acknowledge they're
/// completing the protocol, not processing any blame.
pub fn complete(self) -> ThresholdCore<C> {
pub fn complete(self) -> ThresholdKeys<C> {
self.result.unwrap()
}
@@ -598,17 +646,16 @@ impl<C: Ciphersuite> AdditionalBlameMachine<C> {
/// authenticated as having come from the supposed party and verified as valid. Usage of invalid
/// commitments is considered undefined behavior, and may cause everything from inaccurate blame
/// to panics.
pub fn new<R: RngCore + CryptoRng>(
rng: &mut R,
context: String,
pub fn new(
context: [u8; 32],
n: u16,
mut commitment_msgs: HashMap<Participant, EncryptionKeyMessage<C, Commitments<C>>>,
) -> Result<Self, FrostError<C>> {
) -> Result<Self, PedPoPError<C>> {
let mut commitments = HashMap::new();
let mut encryption = Encryption::new(context, None, rng);
let mut encryption = Decryption::new(context);
for i in 1 ..= n {
let i = Participant::new(i).unwrap();
let Some(msg) = commitment_msgs.remove(&i) else { Err(DkgError::MissingParticipant(i))? };
let Some(msg) = commitment_msgs.remove(&i) else { Err(PedPoPError::MissingParticipant(i))? };
commitments.insert(i, encryption.register(i, msg).commitments);
}
Ok(AdditionalBlameMachine(BlameMachine { commitments, encryption, result: None }))

View File

@@ -0,0 +1,346 @@
use std::collections::HashMap;
use rand_core::{RngCore, CryptoRng, OsRng};
use dalek_ff_group::Ristretto;
use ciphersuite::Ciphersuite;
use crate::*;
const THRESHOLD: u16 = 3;
const PARTICIPANTS: u16 = 5;
/// Clone a map without a specific value.
fn clone_without<K: Clone + core::cmp::Eq + core::hash::Hash, V: Clone>(
map: &HashMap<K, V>,
without: &K,
) -> HashMap<K, V> {
let mut res = map.clone();
res.remove(without).unwrap();
res
}
type PedPoPEncryptedMessage<C> = EncryptedMessage<C, SecretShare<<C as Ciphersuite>::F>>;
type PedPoPSecretShares<C> = HashMap<Participant, PedPoPEncryptedMessage<C>>;
const CONTEXT: [u8; 32] = *b"DKG Test Key Generation ";
// Commit, then return commitment messages, enc keys, and shares
#[allow(clippy::type_complexity)]
fn commit_enc_keys_and_shares<R: RngCore + CryptoRng, C: Ciphersuite>(
rng: &mut R,
) -> (
HashMap<Participant, KeyMachine<C>>,
HashMap<Participant, EncryptionKeyMessage<C, Commitments<C>>>,
HashMap<Participant, C::G>,
HashMap<Participant, PedPoPSecretShares<C>>,
) {
let mut machines = HashMap::new();
let mut commitments = HashMap::new();
let mut enc_keys = HashMap::new();
for i in (1 ..= PARTICIPANTS).map(|i| Participant::new(i).unwrap()) {
let params = ThresholdParams::new(THRESHOLD, PARTICIPANTS, i).unwrap();
let machine = KeyGenMachine::<C>::new(params, CONTEXT);
let (machine, these_commitments) = machine.generate_coefficients(rng);
machines.insert(i, machine);
commitments.insert(
i,
EncryptionKeyMessage::read::<&[u8]>(&mut these_commitments.serialize().as_ref(), params)
.unwrap(),
);
enc_keys.insert(i, commitments[&i].enc_key());
}
let mut secret_shares = HashMap::new();
let machines = machines
.drain()
.map(|(l, machine)| {
let (machine, mut shares) =
machine.generate_secret_shares(rng, clone_without(&commitments, &l)).unwrap();
let shares = shares
.drain()
.map(|(l, share)| {
(
l,
EncryptedMessage::read::<&[u8]>(
&mut share.serialize().as_ref(),
// Only t/n actually matters, so hardcode i to 1 here
ThresholdParams::new(THRESHOLD, PARTICIPANTS, Participant::new(1).unwrap()).unwrap(),
)
.unwrap(),
)
})
.collect::<HashMap<_, _>>();
secret_shares.insert(l, shares);
(l, machine)
})
.collect::<HashMap<_, _>>();
(machines, commitments, enc_keys, secret_shares)
}
fn generate_secret_shares<C: Ciphersuite>(
shares: &HashMap<Participant, PedPoPSecretShares<C>>,
recipient: Participant,
) -> PedPoPSecretShares<C> {
let mut our_secret_shares = HashMap::new();
for (i, shares) in shares {
if recipient == *i {
continue;
}
our_secret_shares.insert(*i, shares[&recipient].clone());
}
our_secret_shares
}
/// Fully perform the PedPoP key generation algorithm.
fn pedpop_gen<R: RngCore + CryptoRng, C: Ciphersuite>(
rng: &mut R,
) -> HashMap<Participant, ThresholdKeys<C>> {
let (mut machines, _, _, secret_shares) = commit_enc_keys_and_shares::<_, C>(rng);
let mut verification_shares = None;
let mut group_key = None;
machines
.drain()
.map(|(i, machine)| {
let our_secret_shares = generate_secret_shares(&secret_shares, i);
let these_keys = machine.calculate_share(rng, our_secret_shares).unwrap().complete();
// Verify the verification_shares are agreed upon
if verification_shares.is_none() {
verification_shares = Some(
these_keys
.params()
.all_participant_indexes()
.map(|i| (i, these_keys.original_verification_share(i)))
.collect::<HashMap<_, _>>(),
);
}
assert_eq!(
verification_shares.as_ref().unwrap(),
&these_keys
.params()
.all_participant_indexes()
.map(|i| (i, these_keys.original_verification_share(i)))
.collect::<HashMap<_, _>>()
);
// Verify the group keys are agreed upon
if group_key.is_none() {
group_key = Some(these_keys.group_key());
}
assert_eq!(group_key.unwrap(), these_keys.group_key());
(i, these_keys)
})
.collect::<HashMap<_, _>>()
}
const ONE: Participant = Participant::new(1).unwrap();
const TWO: Participant = Participant::new(2).unwrap();
#[test]
fn test_pedpop() {
let _ = core::hint::black_box(pedpop_gen::<_, Ristretto>(&mut OsRng));
}
fn test_blame(
commitment_msgs: &HashMap<Participant, EncryptionKeyMessage<Ristretto, Commitments<Ristretto>>>,
machines: Vec<BlameMachine<Ristretto>>,
msg: &PedPoPEncryptedMessage<Ristretto>,
blame: &Option<EncryptionKeyProof<Ristretto>>,
) {
for machine in machines {
let (additional, blamed) = machine.blame(ONE, TWO, msg.clone(), blame.clone());
assert_eq!(blamed, ONE);
// Verify additional blame also works
assert_eq!(additional.blame(ONE, TWO, msg.clone(), blame.clone()), ONE);
// Verify machines constructed with AdditionalBlameMachine::new work
assert_eq!(
AdditionalBlameMachine::new(CONTEXT, PARTICIPANTS, commitment_msgs.clone()).unwrap().blame(
ONE,
TWO,
msg.clone(),
blame.clone()
),
ONE,
);
}
}
// TODO: Write a macro which expands to the following
#[test]
fn invalid_encryption_pop_blame() {
let (mut machines, commitment_msgs, _, mut secret_shares) =
commit_enc_keys_and_shares::<_, Ristretto>(&mut OsRng);
// Mutate the PoP of the encrypted message from 1 to 2
secret_shares.get_mut(&ONE).unwrap().get_mut(&TWO).unwrap().invalidate_pop();
let mut blame = None;
let machines = machines
.drain()
.filter_map(|(i, machine)| {
let our_secret_shares = generate_secret_shares(&secret_shares, i);
let machine = machine.calculate_share(&mut OsRng, our_secret_shares);
if i == TWO {
assert_eq!(
machine.err(),
Some(PedPoPError::InvalidShare { participant: ONE, blame: None })
);
// Explicitly declare we have a blame object, which happens to be None since invalid PoP
// is self-explainable
blame = Some(None);
None
} else {
Some(machine.unwrap())
}
})
.collect::<Vec<_>>();
test_blame(&commitment_msgs, machines, &secret_shares[&ONE][&TWO].clone(), &blame.unwrap());
}
#[test]
fn invalid_ecdh_blame() {
let (mut machines, commitment_msgs, _, mut secret_shares) =
commit_enc_keys_and_shares::<_, Ristretto>(&mut OsRng);
// Mutate the share to trigger a blame event
// Mutates from 2 to 1, as 1 is expected to end up malicious for test_blame to pass
// While here, 2 is malicious, this is so 1 creates the blame proof
// We then malleate 1's blame proof, so 1 ends up malicious
// Doesn't simply invalidate the PoP as that won't have a blame statement
// By mutating the encrypted data, we do ensure a blame statement is created
secret_shares
.get_mut(&TWO)
.unwrap()
.get_mut(&ONE)
.unwrap()
.invalidate_msg(&mut OsRng, CONTEXT, TWO);
let mut blame = None;
let machines = machines
.drain()
.filter_map(|(i, machine)| {
let our_secret_shares = generate_secret_shares(&secret_shares, i);
let machine = machine.calculate_share(&mut OsRng, our_secret_shares);
if i == ONE {
blame = Some(match machine.err() {
Some(PedPoPError::InvalidShare { participant: TWO, blame: Some(blame) }) => Some(blame),
_ => panic!(),
});
None
} else {
Some(machine.unwrap())
}
})
.collect::<Vec<_>>();
blame.as_mut().unwrap().as_mut().unwrap().invalidate_key();
test_blame(&commitment_msgs, machines, &secret_shares[&TWO][&ONE].clone(), &blame.unwrap());
}
// This should be largely equivalent to the prior test
#[test]
fn invalid_dleq_blame() {
let (mut machines, commitment_msgs, _, mut secret_shares) =
commit_enc_keys_and_shares::<_, Ristretto>(&mut OsRng);
secret_shares
.get_mut(&TWO)
.unwrap()
.get_mut(&ONE)
.unwrap()
.invalidate_msg(&mut OsRng, CONTEXT, TWO);
let mut blame = None;
let machines = machines
.drain()
.filter_map(|(i, machine)| {
let our_secret_shares = generate_secret_shares(&secret_shares, i);
let machine = machine.calculate_share(&mut OsRng, our_secret_shares);
if i == ONE {
blame = Some(match machine.err() {
Some(PedPoPError::InvalidShare { participant: TWO, blame: Some(blame) }) => Some(blame),
_ => panic!(),
});
None
} else {
Some(machine.unwrap())
}
})
.collect::<Vec<_>>();
blame.as_mut().unwrap().as_mut().unwrap().invalidate_dleq();
test_blame(&commitment_msgs, machines, &secret_shares[&TWO][&ONE].clone(), &blame.unwrap());
}
#[test]
fn invalid_share_serialization_blame() {
let (mut machines, commitment_msgs, enc_keys, mut secret_shares) =
commit_enc_keys_and_shares::<_, Ristretto>(&mut OsRng);
secret_shares.get_mut(&ONE).unwrap().get_mut(&TWO).unwrap().invalidate_share_serialization(
&mut OsRng,
CONTEXT,
ONE,
enc_keys[&TWO],
);
let mut blame = None;
let machines = machines
.drain()
.filter_map(|(i, machine)| {
let our_secret_shares = generate_secret_shares(&secret_shares, i);
let machine = machine.calculate_share(&mut OsRng, our_secret_shares);
if i == TWO {
blame = Some(match machine.err() {
Some(PedPoPError::InvalidShare { participant: ONE, blame: Some(blame) }) => Some(blame),
_ => panic!(),
});
None
} else {
Some(machine.unwrap())
}
})
.collect::<Vec<_>>();
test_blame(&commitment_msgs, machines, &secret_shares[&ONE][&TWO].clone(), &blame.unwrap());
}
#[test]
fn invalid_share_value_blame() {
let (mut machines, commitment_msgs, enc_keys, mut secret_shares) =
commit_enc_keys_and_shares::<_, Ristretto>(&mut OsRng);
secret_shares.get_mut(&ONE).unwrap().get_mut(&TWO).unwrap().invalidate_share_value(
&mut OsRng,
CONTEXT,
ONE,
enc_keys[&TWO],
);
let mut blame = None;
let machines = machines
.drain()
.filter_map(|(i, machine)| {
let our_secret_shares = generate_secret_shares(&secret_shares, i);
let machine = machine.calculate_share(&mut OsRng, our_secret_shares);
if i == TWO {
blame = Some(match machine.err() {
Some(PedPoPError::InvalidShare { participant: ONE, blame: Some(blame) }) => Some(blame),
_ => panic!(),
});
None
} else {
Some(machine.unwrap())
}
})
.collect::<Vec<_>>();
test_blame(&commitment_msgs, machines, &secret_shares[&ONE][&TWO].clone(), &blame.unwrap());
}

View File

@@ -0,0 +1,34 @@
[package]
name = "dkg-promote"
version = "0.6.1"
description = "Promotions for keys from the dkg crate"
license = "MIT"
repository = "https://github.com/serai-dex/serai/tree/develop/crypto/dkg/promote"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = ["dkg", "multisig", "threshold", "ff", "group"]
edition = "2021"
rust-version = "1.80"
[package.metadata.docs.rs]
all-features = true
rustdoc-args = ["--cfg", "docsrs"]
[lints]
workspace = true
[dependencies]
thiserror = { version = "2", default-features = false, features = ["std"] }
rand_core = { version = "0.6", default-features = false, features = ["std"] }
transcript = { package = "flexible-transcript", path = "../../transcript", version = "^0.3.2", default-features = false, features = ["std", "recommended"] }
ciphersuite = { path = "../../ciphersuite", version = "^0.4.1", default-features = false, features = ["std"] }
dleq = { path = "../../dleq", version = "^0.4.1", default-features = false, features = ["std", "serialize"] }
dkg = { path = "../", version = "0.6.1", default-features = false, features = ["std"] }
[dev-dependencies]
zeroize = { version = "^1.5", default-features = false, features = ["std", "zeroize_derive"] }
rand_core = { version = "0.6", default-features = false, features = ["getrandom"] }
dalek-ff-group = { path = "../../dalek-ff-group" }
dkg-recovery = { path = "../recovery", default-features = false, features = ["std"] }

View File

@@ -0,0 +1,21 @@
MIT License
Copyright (c) 2021-2025 Luke Parker
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

View File

@@ -0,0 +1,13 @@
# Distributed Key Generation - Promote
This crate implements 'promotions' for keys from the
[`dkg`](https://docs.rs/dkg) crate. A promotion takes a set of keys and maps it
to a different `Ciphersuite`.
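As a rough illustration of the flow this crate's tests exercise, the sketch below promotes every participant's keys to a hypothetical `AltGenerator` ciphersuite (one sharing Ristretto's field and group but using a different generator, as the tests in this PR define). The `keys` input and the `AltGenerator` type are assumptions for illustration, not part of this crate's API.
```rust
// Minimal sketch, assuming `keys` came from a prior DKG and `AltGenerator` is a
// Ciphersuite with the same field/group as Ristretto but a different generator.
use std::collections::HashMap;
use rand_core::OsRng;
use dalek_ff_group::Ristretto;
use dkg_promote::{GeneratorPromotion, ThresholdKeys};

fn promote_all(keys: &[ThresholdKeys<Ristretto>]) -> Vec<ThresholdKeys<AltGenerator>> {
  // Each participant promotes their own share, producing a proof of correct promotion
  let mut machines = HashMap::new();
  let mut proofs = HashMap::new();
  for keys in keys {
    let i = keys.params().i();
    let (machine, proof) =
      GeneratorPromotion::<_, AltGenerator>::promote(&mut OsRng, keys.clone());
    machines.insert(i, machine);
    proofs.insert(i, proof);
  }

  // Each participant then verifies everyone else's proofs to obtain the promoted keys
  machines
    .into_iter()
    .map(|(i, machine)| {
      let mut others = proofs.clone();
      others.remove(&i);
      machine.complete(&others).expect("another participant's proof was invalid")
    })
    .collect()
}
```
In practice, each participant would broadcast their proof and verify the proofs they receive; collecting everything locally, as above and in the tests, is solely for convenience.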
This crate was originally part of the `dkg` crate, which was
[audited by Cypher Stack in March 2023](
https://github.com/serai-dex/serai/raw/e1bb2c191b7123fd260d008e31656d090d559d21/audits/Cypher%20Stack%20crypto%20March%202023/Audit.pdf
), culminating in commit
[669d2dbffc1dafb82a09d9419ea182667115df06](
https://github.com/serai-dex/serai/tree/669d2dbffc1dafb82a09d9419ea182667115df06
). Any subsequent changes have not undergone auditing.

View File

@@ -1,7 +1,11 @@
#![cfg_attr(docsrs, feature(doc_cfg))]
#![doc = include_str!("../README.md")]
// This crate requires `dleq` which doesn't support no-std via std-shims
// #![cfg_attr(not(feature = "std"), no_std)]
use core::{marker::PhantomData, ops::Deref};
use std::{
io::{self, Read, Write},
sync::Arc,
collections::HashMap,
};
@@ -12,11 +16,37 @@ use ciphersuite::{group::GroupEncoding, Ciphersuite};
use transcript::{Transcript, RecommendedTranscript};
use dleq::DLEqProof;
use crate::{Participant, DkgError, ThresholdCore, ThresholdKeys, validate_map};
pub use dkg::*;
/// Promote a set of keys to another Ciphersuite definition.
pub trait CiphersuitePromote<C2: Ciphersuite> {
fn promote(self) -> ThresholdKeys<C2>;
#[cfg(test)]
mod tests;
/// Errors encountered when promoting keys.
#[derive(Clone, PartialEq, Eq, Debug, thiserror::Error)]
pub enum PromotionError {
/// Invalid participant identifier.
#[error("invalid participant (1 <= participant <= {n}, yet participant is {participant})")]
InvalidParticipant {
/// The total amount of participants.
n: u16,
/// The specified participant.
participant: Participant,
},
/// An incorrect amount of participants was specified.
#[error("incorrect amount of participants. {t} <= amount <= {n}, yet amount is {amount}")]
IncorrectAmountOfParticipants {
/// The threshold required.
t: u16,
/// The total amount of participants.
n: u16,
/// The amount of participants specified.
amount: usize,
},
/// Participant provided an invalid proof.
#[error("invalid proof {0}")]
InvalidProof(Participant),
}
fn transcript<G: GroupEncoding>(key: &G, i: Participant) -> RecommendedTranscript {
@@ -65,20 +95,21 @@ pub struct GeneratorPromotion<C1: Ciphersuite, C2: Ciphersuite> {
}
impl<C1: Ciphersuite, C2: Ciphersuite<F = C1::F, G = C1::G>> GeneratorPromotion<C1, C2> {
/// Begin promoting keys from one generator to another. Returns a proof this share was properly
/// promoted.
/// Begin promoting keys from one generator to another.
///
/// Returns a proof this share was properly promoted.
pub fn promote<R: RngCore + CryptoRng>(
rng: &mut R,
base: ThresholdKeys<C1>,
) -> (GeneratorPromotion<C1, C2>, GeneratorProof<C1>) {
// Do a DLEqProof for the new generator
let proof = GeneratorProof {
share: C2::generator() * base.secret_share().deref(),
share: C2::generator() * base.original_secret_share().deref(),
proof: DLEqProof::prove(
rng,
&mut transcript(&base.core.group_key(), base.params().i),
&mut transcript(&base.original_group_key(), base.params().i()),
&[C1::generator(), C2::generator()],
base.secret_share(),
base.original_secret_share(),
),
};
@@ -89,34 +120,49 @@ impl<C1: Ciphersuite, C2: Ciphersuite<F = C1::F, G = C1::G>> GeneratorPromotion<
pub fn complete(
self,
proofs: &HashMap<Participant, GeneratorProof<C1>>,
) -> Result<ThresholdKeys<C2>, DkgError<()>> {
) -> Result<ThresholdKeys<C2>, PromotionError> {
let params = self.base.params();
validate_map(proofs, &(1 ..= params.n).map(Participant).collect::<Vec<_>>(), params.i)?;
let original_shares = self.base.verification_shares();
if proofs.len() != (usize::from(params.n()) - 1) {
Err(PromotionError::IncorrectAmountOfParticipants {
t: params.n(),
n: params.n(),
amount: proofs.len() + 1,
})?;
}
for i in proofs.keys().copied() {
if u16::from(i) > params.n() {
Err(PromotionError::InvalidParticipant { n: params.n(), participant: i })?;
}
}
let mut verification_shares = HashMap::new();
verification_shares.insert(params.i, self.proof.share);
for (i, proof) in proofs {
let i = *i;
verification_shares.insert(params.i(), self.proof.share);
for i in 1 ..= params.n() {
let i = Participant::new(i).unwrap();
if i == params.i() {
continue;
}
let proof = proofs.get(&i).unwrap();
proof
.proof
.verify(
&mut transcript(&self.base.core.group_key(), i),
&mut transcript(&self.base.original_group_key(), i),
&[C1::generator(), C2::generator()],
&[original_shares[&i], proof.share],
&[self.base.original_verification_share(i), proof.share],
)
.map_err(|_| DkgError::InvalidCommitments(i))?;
.map_err(|_| PromotionError::InvalidProof(i))?;
verification_shares.insert(i, proof.share);
}
Ok(ThresholdKeys {
core: Arc::new(ThresholdCore::new(
Ok(
ThresholdKeys::new(
params,
self.base.secret_share().clone(),
self.base.interpolation().clone(),
self.base.original_secret_share().clone(),
verification_shares,
)),
offset: None,
})
)
.unwrap(),
)
}
}

View File

@@ -0,0 +1,113 @@
use core::marker::PhantomData;
use std::collections::HashMap;
use zeroize::{Zeroize, Zeroizing};
use rand_core::OsRng;
use dalek_ff_group::Ristretto;
use ciphersuite::{
group::{ff::Field, Group},
Ciphersuite,
};
use dkg::*;
use dkg_recovery::recover_key;
use crate::{GeneratorPromotion, GeneratorProof};
#[derive(Clone, Copy, PartialEq, Eq, Debug, Zeroize)]
struct AltGenerator<C: Ciphersuite> {
_curve: PhantomData<C>,
}
impl<C: Ciphersuite> Ciphersuite for AltGenerator<C> {
type F = C::F;
type G = C::G;
type H = C::H;
const ID: &'static [u8] = b"Alternate Ciphersuite";
fn generator() -> Self::G {
C::G::generator() * <C as Ciphersuite>::hash_to_F(b"DKG Promotion Test", b"generator")
}
fn hash_to_F(dst: &[u8], data: &[u8]) -> Self::F {
<C as Ciphersuite>::hash_to_F(dst, data)
}
}
/// Clone a map without a specific value.
pub fn clone_without<K: Clone + core::cmp::Eq + core::hash::Hash, V: Clone>(
map: &HashMap<K, V>,
without: &K,
) -> HashMap<K, V> {
let mut res = map.clone();
res.remove(without).unwrap();
res
}
// Test promotion of threshold keys to another generator
#[test]
fn test_generator_promotion() {
// Generate a set of `ThresholdKeys`
const PARTICIPANTS: u16 = 5;
let keys: [ThresholdKeys<_>; PARTICIPANTS as usize] = {
let shares: [<Ristretto as Ciphersuite>::F; PARTICIPANTS as usize] =
core::array::from_fn(|_| <Ristretto as Ciphersuite>::F::random(&mut OsRng));
let verification_shares = (0 .. PARTICIPANTS)
.map(|i| {
(
Participant::new(i + 1).unwrap(),
<Ristretto as Ciphersuite>::generator() * shares[usize::from(i)],
)
})
.collect::<HashMap<_, _>>();
core::array::from_fn(|i| {
ThresholdKeys::new(
ThresholdParams::new(
PARTICIPANTS,
PARTICIPANTS,
Participant::new(u16::try_from(i + 1).unwrap()).unwrap(),
)
.unwrap(),
Interpolation::Constant(vec![<Ristretto as Ciphersuite>::F::ONE; PARTICIPANTS as usize]),
Zeroizing::new(shares[i]),
verification_shares.clone(),
)
.unwrap()
})
};
// Perform the promotion
let mut promotions = HashMap::new();
let mut proofs = HashMap::new();
for keys in &keys {
let i = keys.params().i();
let (promotion, proof) =
GeneratorPromotion::<_, AltGenerator<Ristretto>>::promote(&mut OsRng, keys.clone());
promotions.insert(i, promotion);
proofs.insert(
i,
GeneratorProof::<Ristretto>::read::<&[u8]>(&mut proof.serialize().as_ref()).unwrap(),
);
}
// Complete the promotion, and verify it worked
let new_group_key = AltGenerator::<Ristretto>::generator() * *recover_key(&keys).unwrap();
for (i, promoting) in promotions.drain() {
let promoted = promoting.complete(&clone_without(&proofs, &i)).unwrap();
assert_eq!(keys[usize::from(u16::from(i) - 1)].params(), promoted.params());
assert_eq!(
keys[usize::from(u16::from(i) - 1)].original_secret_share(),
promoted.original_secret_share()
);
assert_eq!(new_group_key, promoted.group_key());
for l in 0 .. PARTICIPANTS {
let verification_share =
promoted.original_verification_share(Participant::new(l + 1).unwrap());
assert_eq!(
AltGenerator::<Ristretto>::generator() * **keys[usize::from(l)].original_secret_share(),
verification_share
);
}
}
}

View File

@@ -0,0 +1,34 @@
[package]
name = "dkg-recovery"
version = "0.6.0"
description = "Recover a secret-shared key from a collection of dkg::ThresholdKeys"
license = "MIT"
repository = "https://github.com/serai-dex/serai/tree/develop/crypto/dkg/recovery"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = ["dkg", "multisig", "threshold", "ff", "group"]
edition = "2021"
rust-version = "1.66"
[package.metadata.docs.rs]
all-features = true
rustdoc-args = ["--cfg", "docsrs"]
[lints]
workspace = true
[dependencies]
zeroize = { version = "^1.5", default-features = false }
thiserror = { version = "2", default-features = false }
ciphersuite = { path = "../../ciphersuite", version = "^0.4.1", default-features = false }
dkg = { path = "../", version = "0.6", default-features = false }
[features]
std = [
"zeroize/std",
"thiserror/std",
"ciphersuite/std",
"dkg/std",
]
default = ["std"]

View File

@@ -0,0 +1,21 @@
MIT License
Copyright (c) 2021-2025 Luke Parker
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

View File

@@ -0,0 +1,14 @@
# Distributed Key Generation - Recovery
A utility function to recover a key from its secret shares.
Keys likely SHOULD NOT ever be recovered, making this primarily intended for
testing purposes. Instead, the shares of the key should be used to produce
shares for the desired action, allowing use of the key without ever
reconstructing it.
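For testing contexts where recovery is appropriate, usage is a single call. The sketch below is illustrative only: it assumes `keys` holds at least `t` consistent `ThresholdKeys` from the same session (e.g. the output of the `dkg` crate's PedPoP flow), and that the `ciphersuite` crate's Ristretto support is enabled.
```rust
// Minimal sketch; assumes `keys` holds at least `t` consistent ThresholdKeys
// produced by a prior DKG over Ristretto.
use zeroize::Zeroizing;
use ciphersuite::{Ciphersuite, Ristretto};
use dkg_recovery::{recover_key, ThresholdKeys};

fn group_secret(keys: &[ThresholdKeys<Ristretto>]) -> Zeroizing<<Ristretto as Ciphersuite>::F> {
  // Reconstructs the group's secret scalar; per the note above, doing this
  // outside of tests defeats the point of a threshold scheme
  recover_key(keys).expect("keys were insufficient or inconsistent")
}
```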
Before being smashed out of the `dkg` crate, this code was [audited by Cypher Stack in March 2023](
https://github.com/serai-dex/serai/raw/e1bb2c191b7123fd260d008e31656d090d559d21/audits/Cypher%20Stack%20crypto%20March%202023/Audit.pdf
), culminating in commit [669d2dbffc1dafb82a09d9419ea182667115df06](
https://github.com/serai-dex/serai/tree/669d2dbffc1dafb82a09d9419ea182667115df06
). Any subsequent changes have not undergone auditing.

View File

@@ -0,0 +1,85 @@
#![cfg_attr(docsrs, feature(doc_cfg))]
#![doc = include_str!("../README.md")]
#![no_std]
use core::ops::{Deref, DerefMut};
extern crate alloc;
use alloc::vec::Vec;
use zeroize::Zeroizing;
use ciphersuite::Ciphersuite;
pub use dkg::*;
/// Errors encountered when recovering a secret-shared key from a collection of
/// `dkg::ThresholdKeys`.
#[derive(Clone, PartialEq, Eq, Debug, thiserror::Error)]
pub enum RecoveryError {
/// No keys were provided.
#[error("no keys provided")]
NoKeysProvided,
/// Not enough keys were provided.
#[error("not enough keys provided (threshold required {required}, provided {provided})")]
NotEnoughKeysProvided {
/// The amount of keys required (the threshold).
required: u16,
/// The amount of keys provided.
provided: usize,
},
/// The keys had inconsistent parameters.
#[error("keys had inconsistent parameters")]
InconsistentParameters,
/// The keys are from distinct secret-sharing sessions or otherwise corrupt.
#[error("recovery failed")]
Failure,
/// An error propagated from the underlying `dkg` crate.
#[error("error from dkg ({0})")]
DkgError(DkgError),
}
/// Recover a shared secret from a collection of `dkg::ThresholdKeys`.
pub fn recover_key<C: Ciphersuite>(
keys: &[ThresholdKeys<C>],
) -> Result<Zeroizing<C::F>, RecoveryError> {
let included = keys.iter().map(|keys| keys.params().i()).collect::<Vec<_>>();
let keys_len = keys.len();
let mut keys = keys.iter();
let first_keys = keys.next().ok_or(RecoveryError::NoKeysProvided)?;
{
let t = first_keys.params().t();
if keys_len < usize::from(t) {
Err(RecoveryError::NotEnoughKeysProvided { required: t, provided: keys_len })?;
}
}
{
let first_params = (
first_keys.params().t(),
first_keys.params().n(),
first_keys.group_key(),
first_keys.current_scalar(),
first_keys.current_offset(),
);
for keys in keys.clone() {
let params = (
keys.params().t(),
keys.params().n(),
keys.group_key(),
keys.current_scalar(),
keys.current_offset(),
);
if params != first_params {
Err(RecoveryError::InconsistentParameters)?;
}
}
}
let mut res: Zeroizing<_> =
first_keys.view(included.clone()).map_err(RecoveryError::DkgError)?.secret_share().clone();
for keys in keys {
*res.deref_mut() +=
keys.view(included.clone()).map_err(RecoveryError::DkgError)?.secret_share().deref();
}
if (C::generator() * res.deref()) != first_keys.group_key() {
Err(RecoveryError::Failure)?;
}
Ok(res)
}

File diff suppressed because it is too large

View File

@@ -1,141 +0,0 @@
#[cfg(feature = "std")]
use core::ops::Deref;
use std_shims::{vec, vec::Vec, collections::HashSet};
#[cfg(feature = "std")]
use std_shims::collections::HashMap;
#[cfg(feature = "std")]
use zeroize::Zeroizing;
#[cfg(feature = "std")]
use ciphersuite::group::ff::Field;
use ciphersuite::{
group::{Group, GroupEncoding},
Ciphersuite,
};
use crate::DkgError;
#[cfg(feature = "std")]
use crate::{Participant, ThresholdParams, ThresholdCore, lagrange};
fn check_keys<C: Ciphersuite>(keys: &[C::G]) -> Result<u16, DkgError<()>> {
if keys.is_empty() {
Err(DkgError::InvalidSigningSet)?;
}
// Too many signers
let keys_len = u16::try_from(keys.len()).map_err(|_| DkgError::InvalidSigningSet)?;
// Duplicated public keys
if keys.iter().map(|key| key.to_bytes().as_ref().to_vec()).collect::<HashSet<_>>().len() !=
keys.len()
{
Err(DkgError::InvalidSigningSet)?;
}
Ok(keys_len)
}
// This function panics if called with keys whose length exceed 2**16.
// This is fine since it's internal and all calls occur after calling check_keys, which does check
// the keys' length.
fn binding_factor_transcript<C: Ciphersuite>(
context: &[u8],
keys: &[C::G],
) -> Result<Vec<u8>, DkgError<()>> {
let mut transcript = vec![];
transcript.push(u8::try_from(context.len()).map_err(|_| DkgError::InvalidSigningSet)?);
transcript.extend(context);
transcript.extend(u16::try_from(keys.len()).unwrap().to_le_bytes());
for key in keys {
transcript.extend(key.to_bytes().as_ref());
}
Ok(transcript)
}
fn binding_factor<C: Ciphersuite>(mut transcript: Vec<u8>, i: u16) -> C::F {
transcript.extend(i.to_le_bytes());
C::hash_to_F(b"musig", &transcript)
}
/// The group key resulting from using this library's MuSig key gen.
///
/// This function will return an error if the context is longer than 255 bytes.
///
/// Creating an aggregate key with a list containing duplicated public keys will return an error.
pub fn musig_key<C: Ciphersuite>(context: &[u8], keys: &[C::G]) -> Result<C::G, DkgError<()>> {
let keys_len = check_keys::<C>(keys)?;
let transcript = binding_factor_transcript::<C>(context, keys)?;
let mut res = C::G::identity();
for i in 1 ..= keys_len {
res += keys[usize::from(i - 1)] * binding_factor::<C>(transcript.clone(), i);
}
Ok(res)
}
/// A n-of-n non-interactive DKG which does not guarantee the usability of the resulting key.
///
/// Creating an aggregate key with a list containing duplicated public keys returns an error.
#[cfg(feature = "std")]
pub fn musig<C: Ciphersuite>(
context: &[u8],
private_key: &Zeroizing<C::F>,
keys: &[C::G],
) -> Result<ThresholdCore<C>, DkgError<()>> {
let keys_len = check_keys::<C>(keys)?;
let our_pub_key = C::generator() * private_key.deref();
let Some(pos) = keys.iter().position(|key| *key == our_pub_key) else {
// Not present in signing set
Err(DkgError::InvalidSigningSet)?
};
let params = ThresholdParams::new(
keys_len,
keys_len,
// These errors shouldn't be possible, as pos is bounded to len - 1
// Since len is prior guaranteed to be within u16::MAX, pos + 1 must also be
Participant::new((pos + 1).try_into().map_err(|_| DkgError::InvalidSigningSet)?)
.ok_or(DkgError::InvalidSigningSet)?,
)?;
// Calculate the binding factor per-key
let transcript = binding_factor_transcript::<C>(context, keys)?;
let mut binding = Vec::with_capacity(keys.len());
for i in 1 ..= keys_len {
binding.push(binding_factor::<C>(transcript.clone(), i));
}
// Multiply our private key by our binding factor
let mut secret_share = private_key.clone();
*secret_share *= binding[pos];
// Calculate verification shares
let mut verification_shares = HashMap::new();
// When this library offers a ThresholdView for a specific signing set, it applies the lagrange
// factor
// Since this is a n-of-n scheme, there's only one possible signing set, and one possible
// lagrange factor
// In the name of simplicity, we define the group key as the sum of all bound keys
// Accordingly, the secret share must be multiplied by the inverse of the lagrange factor, along
// with all verification shares
// This is less performant than simply defining the group key as the sum of all post-lagrange
// bound keys, yet the simplicity is preferred
let included = (1 ..= keys_len)
// This error also shouldn't be possible, for the same reasons as documented above
.map(|l| Participant::new(l).ok_or(DkgError::InvalidSigningSet))
.collect::<Result<Vec<_>, _>>()?;
let mut group_key = C::G::identity();
for (l, p) in included.iter().enumerate() {
let bound = keys[l] * binding[l];
group_key += bound;
let lagrange_inv = lagrange::<C::F>(*p, &included).invert().unwrap();
if params.i() == *p {
*secret_share *= lagrange_inv;
}
verification_shares.insert(*p, bound * lagrange_inv);
}
debug_assert_eq!(C::generator() * secret_share.deref(), verification_shares[&params.i()]);
debug_assert_eq!(musig_key::<C>(context, keys).unwrap(), group_key);
Ok(ThresholdCore { params, secret_share, group_key, verification_shares })
}

View File

@@ -1,101 +0,0 @@
use core::ops::Deref;
use std::collections::HashMap;
use zeroize::Zeroizing;
use rand_core::{RngCore, CryptoRng};
use ciphersuite::{group::ff::Field, Ciphersuite};
use crate::{Participant, ThresholdCore, ThresholdKeys, lagrange, musig::musig as musig_fn};
mod musig;
pub use musig::test_musig;
/// FROST key generation testing utility.
pub mod pedpop;
use pedpop::pedpop_gen;
// Promotion test.
mod promote;
use promote::test_generator_promotion;
/// Constant amount of participants to use when testing.
pub const PARTICIPANTS: u16 = 5;
/// Constant threshold of participants to use when testing.
pub const THRESHOLD: u16 = ((PARTICIPANTS * 2) / 3) + 1;
/// Clone a map without a specific value.
pub fn clone_without<K: Clone + core::cmp::Eq + core::hash::Hash, V: Clone>(
map: &HashMap<K, V>,
without: &K,
) -> HashMap<K, V> {
let mut res = map.clone();
res.remove(without).unwrap();
res
}
/// Recover the secret from a collection of keys.
///
/// This will panic if no keys, an insufficient amount of keys, or the wrong keys are provided.
pub fn recover_key<C: Ciphersuite>(keys: &HashMap<Participant, ThresholdKeys<C>>) -> C::F {
let first = keys.values().next().expect("no keys provided");
assert!(keys.len() >= first.params().t().into(), "not enough keys provided");
let included = keys.keys().copied().collect::<Vec<_>>();
let group_private = keys.iter().fold(C::F::ZERO, |accum, (i, keys)| {
accum + (lagrange::<C::F>(*i, &included) * keys.secret_share().deref())
});
assert_eq!(C::generator() * group_private, first.group_key(), "failed to recover keys");
group_private
}
/// Generate threshold keys for tests.
pub fn key_gen<R: RngCore + CryptoRng, C: Ciphersuite>(
rng: &mut R,
) -> HashMap<Participant, ThresholdKeys<C>> {
let res = pedpop_gen(rng)
.drain()
.map(|(i, core)| {
assert_eq!(
&ThresholdCore::<C>::read::<&[u8]>(&mut core.serialize().as_ref()).unwrap(),
&core
);
(i, ThresholdKeys::new(core))
})
.collect();
assert_eq!(C::generator() * recover_key(&res), res[&Participant(1)].group_key());
res
}
/// Generate MuSig keys for tests.
pub fn musig_key_gen<R: RngCore + CryptoRng, C: Ciphersuite>(
rng: &mut R,
) -> HashMap<Participant, ThresholdKeys<C>> {
let mut keys = vec![];
let mut pub_keys = vec![];
for _ in 0 .. PARTICIPANTS {
let key = Zeroizing::new(C::F::random(&mut *rng));
pub_keys.push(C::generator() * *key);
keys.push(key);
}
let mut res = HashMap::new();
for key in keys {
let these_keys = musig_fn::<C>(b"Test MuSig Key Gen", &key, &pub_keys).unwrap();
res.insert(these_keys.params().i(), ThresholdKeys::new(these_keys));
}
assert_eq!(C::generator() * recover_key(&res), res[&Participant(1)].group_key());
res
}
/// Run the test suite on a ciphersuite.
pub fn test_ciphersuite<R: RngCore + CryptoRng, C: Ciphersuite>(rng: &mut R) {
key_gen::<_, C>(rng);
test_generator_promotion::<_, C>(rng);
}
#[test]
fn test_with_ristretto() {
test_ciphersuite::<_, ciphersuite::Ristretto>(&mut rand_core::OsRng);
}

View File

@@ -1,61 +0,0 @@
use std::collections::HashMap;
use zeroize::Zeroizing;
use rand_core::{RngCore, CryptoRng};
use ciphersuite::{group::ff::Field, Ciphersuite};
use crate::{
ThresholdKeys,
musig::{musig_key, musig},
tests::{PARTICIPANTS, recover_key},
};
/// Tests MuSig key generation.
pub fn test_musig<R: RngCore + CryptoRng, C: Ciphersuite>(rng: &mut R) {
let mut keys = vec![];
let mut pub_keys = vec![];
for _ in 0 .. PARTICIPANTS {
let key = Zeroizing::new(C::F::random(&mut *rng));
pub_keys.push(C::generator() * *key);
keys.push(key);
}
const CONTEXT: &[u8] = b"MuSig Test";
// Empty signing set
musig::<C>(CONTEXT, &Zeroizing::new(C::F::ZERO), &[]).unwrap_err();
// Signing set we're not part of
musig::<C>(CONTEXT, &Zeroizing::new(C::F::ZERO), &[C::generator()]).unwrap_err();
// Test with n keys
{
let mut created_keys = HashMap::new();
let mut verification_shares = HashMap::new();
let group_key = musig_key::<C>(CONTEXT, &pub_keys).unwrap();
for (i, key) in keys.iter().enumerate() {
let these_keys = musig::<C>(CONTEXT, key, &pub_keys).unwrap();
assert_eq!(these_keys.params().t(), PARTICIPANTS);
assert_eq!(these_keys.params().n(), PARTICIPANTS);
assert_eq!(usize::from(these_keys.params().i().0), i + 1);
verification_shares
.insert(these_keys.params().i(), C::generator() * **these_keys.secret_share());
assert_eq!(these_keys.group_key(), group_key);
created_keys.insert(these_keys.params().i(), ThresholdKeys::new(these_keys));
}
for keys in created_keys.values() {
assert_eq!(keys.verification_shares(), verification_shares);
}
assert_eq!(C::generator() * recover_key(&created_keys), group_key);
}
}
#[test]
fn musig_literal() {
test_musig::<_, ciphersuite::Ristretto>(&mut rand_core::OsRng)
}

View File

@@ -1,333 +0,0 @@
use std::collections::HashMap;
use rand_core::{RngCore, CryptoRng};
use ciphersuite::Ciphersuite;
use crate::{
Participant, ThresholdParams, ThresholdCore,
pedpop::{Commitments, KeyGenMachine, SecretShare, KeyMachine},
encryption::{EncryptionKeyMessage, EncryptedMessage},
tests::{THRESHOLD, PARTICIPANTS, clone_without},
};
type PedPoPEncryptedMessage<C> = EncryptedMessage<C, SecretShare<<C as Ciphersuite>::F>>;
type PedPoPSecretShares<C> = HashMap<Participant, PedPoPEncryptedMessage<C>>;
const CONTEXT: &str = "DKG Test Key Generation";
// Commit, then return commitment messages, enc keys, and shares
#[allow(clippy::type_complexity)]
fn commit_enc_keys_and_shares<R: RngCore + CryptoRng, C: Ciphersuite>(
rng: &mut R,
) -> (
HashMap<Participant, KeyMachine<C>>,
HashMap<Participant, EncryptionKeyMessage<C, Commitments<C>>>,
HashMap<Participant, C::G>,
HashMap<Participant, PedPoPSecretShares<C>>,
) {
let mut machines = HashMap::new();
let mut commitments = HashMap::new();
let mut enc_keys = HashMap::new();
for i in (1 ..= PARTICIPANTS).map(Participant) {
let params = ThresholdParams::new(THRESHOLD, PARTICIPANTS, i).unwrap();
let machine = KeyGenMachine::<C>::new(params, CONTEXT.to_string());
let (machine, these_commitments) = machine.generate_coefficients(rng);
machines.insert(i, machine);
commitments.insert(
i,
EncryptionKeyMessage::read::<&[u8]>(&mut these_commitments.serialize().as_ref(), params)
.unwrap(),
);
enc_keys.insert(i, commitments[&i].enc_key());
}
let mut secret_shares = HashMap::new();
let machines = machines
.drain()
.map(|(l, machine)| {
let (machine, mut shares) =
machine.generate_secret_shares(rng, clone_without(&commitments, &l)).unwrap();
let shares = shares
.drain()
.map(|(l, share)| {
(
l,
EncryptedMessage::read::<&[u8]>(
&mut share.serialize().as_ref(),
// Only t/n actually matters, so hardcode i to 1 here
ThresholdParams { t: THRESHOLD, n: PARTICIPANTS, i: Participant(1) },
)
.unwrap(),
)
})
.collect::<HashMap<_, _>>();
secret_shares.insert(l, shares);
(l, machine)
})
.collect::<HashMap<_, _>>();
(machines, commitments, enc_keys, secret_shares)
}
fn generate_secret_shares<C: Ciphersuite>(
shares: &HashMap<Participant, PedPoPSecretShares<C>>,
recipient: Participant,
) -> PedPoPSecretShares<C> {
let mut our_secret_shares = HashMap::new();
for (i, shares) in shares {
if recipient == *i {
continue;
}
our_secret_shares.insert(*i, shares[&recipient].clone());
}
our_secret_shares
}
/// Fully perform the PedPoP key generation algorithm.
pub fn pedpop_gen<R: RngCore + CryptoRng, C: Ciphersuite>(
rng: &mut R,
) -> HashMap<Participant, ThresholdCore<C>> {
let (mut machines, _, _, secret_shares) = commit_enc_keys_and_shares::<_, C>(rng);
let mut verification_shares = None;
let mut group_key = None;
machines
.drain()
.map(|(i, machine)| {
let our_secret_shares = generate_secret_shares(&secret_shares, i);
let these_keys = machine.calculate_share(rng, our_secret_shares).unwrap().complete();
// Verify the verification_shares are agreed upon
if verification_shares.is_none() {
verification_shares = Some(these_keys.verification_shares());
}
assert_eq!(verification_shares.as_ref().unwrap(), &these_keys.verification_shares());
// Verify the group keys are agreed upon
if group_key.is_none() {
group_key = Some(these_keys.group_key());
}
assert_eq!(group_key.unwrap(), these_keys.group_key());
(i, these_keys)
})
.collect::<HashMap<_, _>>()
}
#[cfg(test)]
mod literal {
use rand_core::OsRng;
use ciphersuite::Ristretto;
use crate::{
DkgError,
encryption::EncryptionKeyProof,
pedpop::{BlameMachine, AdditionalBlameMachine},
};
use super::*;
const ONE: Participant = Participant(1);
const TWO: Participant = Participant(2);
fn test_blame(
commitment_msgs: &HashMap<Participant, EncryptionKeyMessage<Ristretto, Commitments<Ristretto>>>,
machines: Vec<BlameMachine<Ristretto>>,
msg: &PedPoPEncryptedMessage<Ristretto>,
blame: &Option<EncryptionKeyProof<Ristretto>>,
) {
for machine in machines {
let (additional, blamed) = machine.blame(ONE, TWO, msg.clone(), blame.clone());
assert_eq!(blamed, ONE);
// Verify additional blame also works
assert_eq!(additional.blame(ONE, TWO, msg.clone(), blame.clone()), ONE);
// Verify machines constructed with AdditionalBlameMachine::new work
assert_eq!(
AdditionalBlameMachine::new(
&mut OsRng,
CONTEXT.to_string(),
PARTICIPANTS,
commitment_msgs.clone()
)
.unwrap()
.blame(ONE, TWO, msg.clone(), blame.clone()),
ONE,
);
}
}
// TODO: Write a macro which expands to the following
#[test]
fn invalid_encryption_pop_blame() {
let (mut machines, commitment_msgs, _, mut secret_shares) =
commit_enc_keys_and_shares::<_, Ristretto>(&mut OsRng);
// Mutate the PoP of the encrypted message from 1 to 2
secret_shares.get_mut(&ONE).unwrap().get_mut(&TWO).unwrap().invalidate_pop();
let mut blame = None;
let machines = machines
.drain()
.filter_map(|(i, machine)| {
let our_secret_shares = generate_secret_shares(&secret_shares, i);
let machine = machine.calculate_share(&mut OsRng, our_secret_shares);
if i == TWO {
assert_eq!(machine.err(), Some(DkgError::InvalidShare { participant: ONE, blame: None }));
// Explicitly declare we have a blame object, which happens to be None since invalid PoP
// is self-explainable
blame = Some(None);
None
} else {
Some(machine.unwrap())
}
})
.collect::<Vec<_>>();
test_blame(&commitment_msgs, machines, &secret_shares[&ONE][&TWO].clone(), &blame.unwrap());
}
#[test]
fn invalid_ecdh_blame() {
let (mut machines, commitment_msgs, _, mut secret_shares) =
commit_enc_keys_and_shares::<_, Ristretto>(&mut OsRng);
// Mutate the share to trigger a blame event
// Mutates from 2 to 1, as 1 is expected to end up malicious for test_blame to pass
// While here, 2 is malicious, this is so 1 creates the blame proof
// We then malleate 1's blame proof, so 1 ends up malicious
// Doesn't simply invalidate the PoP as that won't have a blame statement
// By mutating the encrypted data, we do ensure a blame statement is created
secret_shares
.get_mut(&TWO)
.unwrap()
.get_mut(&ONE)
.unwrap()
.invalidate_msg(&mut OsRng, CONTEXT, TWO);
let mut blame = None;
let machines = machines
.drain()
.filter_map(|(i, machine)| {
let our_secret_shares = generate_secret_shares(&secret_shares, i);
let machine = machine.calculate_share(&mut OsRng, our_secret_shares);
if i == ONE {
blame = Some(match machine.err() {
Some(DkgError::InvalidShare { participant: TWO, blame: Some(blame) }) => Some(blame),
_ => panic!(),
});
None
} else {
Some(machine.unwrap())
}
})
.collect::<Vec<_>>();
blame.as_mut().unwrap().as_mut().unwrap().invalidate_key();
test_blame(&commitment_msgs, machines, &secret_shares[&TWO][&ONE].clone(), &blame.unwrap());
}
// This should be largely equivalent to the prior test
#[test]
fn invalid_dleq_blame() {
let (mut machines, commitment_msgs, _, mut secret_shares) =
commit_enc_keys_and_shares::<_, Ristretto>(&mut OsRng);
secret_shares
.get_mut(&TWO)
.unwrap()
.get_mut(&ONE)
.unwrap()
.invalidate_msg(&mut OsRng, CONTEXT, TWO);
let mut blame = None;
let machines = machines
.drain()
.filter_map(|(i, machine)| {
let our_secret_shares = generate_secret_shares(&secret_shares, i);
let machine = machine.calculate_share(&mut OsRng, our_secret_shares);
if i == ONE {
blame = Some(match machine.err() {
Some(DkgError::InvalidShare { participant: TWO, blame: Some(blame) }) => Some(blame),
_ => panic!(),
});
None
} else {
Some(machine.unwrap())
}
})
.collect::<Vec<_>>();
blame.as_mut().unwrap().as_mut().unwrap().invalidate_dleq();
test_blame(&commitment_msgs, machines, &secret_shares[&TWO][&ONE].clone(), &blame.unwrap());
}
#[test]
fn invalid_share_serialization_blame() {
let (mut machines, commitment_msgs, enc_keys, mut secret_shares) =
commit_enc_keys_and_shares::<_, Ristretto>(&mut OsRng);
secret_shares.get_mut(&ONE).unwrap().get_mut(&TWO).unwrap().invalidate_share_serialization(
&mut OsRng,
CONTEXT,
ONE,
enc_keys[&TWO],
);
let mut blame = None;
let machines = machines
.drain()
.filter_map(|(i, machine)| {
let our_secret_shares = generate_secret_shares(&secret_shares, i);
let machine = machine.calculate_share(&mut OsRng, our_secret_shares);
if i == TWO {
blame = Some(match machine.err() {
Some(DkgError::InvalidShare { participant: ONE, blame: Some(blame) }) => Some(blame),
_ => panic!(),
});
None
} else {
Some(machine.unwrap())
}
})
.collect::<Vec<_>>();
test_blame(&commitment_msgs, machines, &secret_shares[&ONE][&TWO].clone(), &blame.unwrap());
}
#[test]
fn invalid_share_value_blame() {
let (mut machines, commitment_msgs, enc_keys, mut secret_shares) =
commit_enc_keys_and_shares::<_, Ristretto>(&mut OsRng);
secret_shares.get_mut(&ONE).unwrap().get_mut(&TWO).unwrap().invalidate_share_value(
&mut OsRng,
CONTEXT,
ONE,
enc_keys[&TWO],
);
let mut blame = None;
let machines = machines
.drain()
.filter_map(|(i, machine)| {
let our_secret_shares = generate_secret_shares(&secret_shares, i);
let machine = machine.calculate_share(&mut OsRng, our_secret_shares);
if i == TWO {
blame = Some(match machine.err() {
Some(DkgError::InvalidShare { participant: ONE, blame: Some(blame) }) => Some(blame),
_ => panic!(),
});
None
} else {
Some(machine.unwrap())
}
})
.collect::<Vec<_>>();
test_blame(&commitment_msgs, machines, &secret_shares[&ONE][&TWO].clone(), &blame.unwrap());
}
}

Some files were not shown because too many files have changed in this diff