59 Commits

Author SHA1 Message Date
Luke Parker
ca93c82156 Remove borsh from dkg
It pulls in a lot of bespoke dependencies for the little utility it directly provides.

Moves the necessary code into the processor.
2025-11-16 18:25:23 -05:00
Luke Parker
5b1875dae6 Update lockfile after recent cherry picks 2025-11-16 18:21:02 -05:00
Luke Parker
bcd68441be Use Alpine to build the runtime
Smaller, works without issue.
2025-11-16 18:21:02 -05:00
Luke Parker
4ebf9ad9c7 Rust 1.91.1 due to the regression re: wasm builds 2025-11-16 18:21:02 -05:00
Luke Parker
807572199c Update misc versions 2025-11-16 18:21:02 -05:00
Luke Parker
3cdc1536c5 Make ethereum-schnorr-contract no-std and no-alloc eligible 2025-11-16 18:21:01 -05:00
Luke Parker
9e13e5ebff Patch from parity-bip39 back to bip39
Per https://github.com/michalkucharczyk/rust-bip39/tree/mku-2.0.1-release,
`parity-bip39` was a fork to publish a release of `bip39` with two specific PRs
merged. Not only have those PRs been merged, but `bip39` now also accepts
`bitcoin_hashes 0.14` (https://github.com/rust-bitcoin/rust-bip39/pull/76),
making this a great time to reconcile (even though it does technically add a
git dependency until the new release is cut...).
2025-11-16 18:21:01 -05:00
Luke Parker
9b2c254eee Patch librocksdb-sys to never enable jemalloc, which conflicts with mimalloc
Allows us to update mimalloc and enable the newly added guard pages.

Conflict identified by @PlasmaPower.
2025-11-16 18:20:55 -05:00
Luke Parker
0883479068 Remove rust-src as a component for WASM
It's unnecessary since `wasm32v1-none`.
2025-11-16 17:57:07 -05:00
Luke Parker
c5480c63be Build and run the message queue over Alpine
We previously stopped doing so for stability reasons, but this _should_ be tried
again.
2025-11-16 17:56:51 -05:00
Luke Parker
4280ee6987 revm 33 2025-11-16 17:54:43 -05:00
Luke Parker
91673d7ae3 Remove std feature from revm
It's unnecessary and noticeably bloats the tree.
2025-11-16 17:52:58 -05:00
Luke Parker
927f07b62b Move bitcoin-serai to core-json and feature-gate the RPC functionality 2025-11-16 17:52:33 -05:00
Luke Parker
7e774d6d2d Correct when spin::Lazy is exposed as std_shims::sync::LazyLock
It's intended to always be used, even on `std`, when `std::sync::LazyLock` is
not available.
2025-11-16 17:52:26 -05:00
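A minimal sketch of the intended gating, assuming the `rustversion` attributes used elsewhere in `std-shims` (the corresponding diff appears further below):

```rust
// `spin::Lazy` backs `std_shims::sync::LazyLock` whenever `std::sync::LazyLock`
// is unavailable, i.e. on toolchains before 1.80, even with `std` enabled.
#[rustversion::before(1.80)]
pub use spin::Lazy as LazyLock;

#[rustversion::since(1.80)]
#[cfg(not(feature = "std"))]
pub use spin::Lazy as LazyLock;

#[rustversion::since(1.80)]
#[cfg(feature = "std")]
pub use std::sync::LazyLock;
```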
Luke Parker
fccd06b376 Bump revm 2025-11-16 17:52:23 -05:00
Luke Parker
e3edc0a7fc Add patches to remove the unused optional dependencies tracked in tree
Also performs the usual `cargo update`.
2025-11-16 17:51:38 -05:00
Luke Parker
9c47ef2658 Restore deny exception for kayabaNerve/elliptic-curves, accidentally dropped when merging develop 2025-11-04 13:18:59 -05:00
Luke Parker
e1b6b638c6 Merge branch 'develop' into next 2025-11-04 13:14:38 -05:00
Luke Parker
c24768f922 Fix borks from the latest nightly
The `cargo doc` build started to fail with the rolling of `doc_auto_cfg` into
`doc_cfg`, so now we don't build docs for deps (as we can't reasonably update
`generic-array` at this time).

`home` has been patched as we are able to, not as a direct requirement of this
PR.
2025-11-04 13:10:11 -05:00
Luke Parker
87ee879dea doc_auto_cfg -> doc_cfg 2025-11-04 10:20:17 -05:00
Luke Parker
b5603560e8 Merge branch 'develop' into next 2025-11-04 10:19:38 -05:00
Luke Parker
5818f1a41c Update nightly version 2025-11-04 10:05:08 -05:00
Luke Parker
1b781b4b57 Fix CI 2025-10-07 04:39:32 -04:00
Luke Parker
94faf098b6 Update nightly version 2025-10-05 18:44:04 -04:00
Luke Parker
03e45f73cd Merge branch 'develop' into next 2025-10-05 18:43:53 -04:00
Luke Parker
63f7e220c0 Update macOS labels in CI due to deprecation of macos-13 2025-10-05 10:59:40 -04:00
Luke Parker
7d49366373 Move develop to patch-polkadot-sdk (#678)
* Update `build-dependencies` CI action

* Update `develop` to `patch-polkadot-sdk`

Allows us to finally remove the old `serai-dex/substrate` repository _and_
should have CI pass without issue on `develop` again.

The changes made here should be trivial and maintain all prior
behavior/functionality. The most notable are to `chain_spec.rs`, in order to
still use a SCALE-encoded `GenesisConfig` (avoiding `serde_json`).

* CI fixes

* Add `/usr/local/opt/llvm/lib` to paths on macOS hosts

* Attempt to use `LD_LIBRARY_PATH` in macOS GitHub CI

* Use `libp2p 0.56` in `serai-node`

* Correct Windows build dependencies

* Correct `llvm/lib` path on macOS

* Correct how macOS 13 and 14 have different homebrew paths

* Use `sw_vers` instead of `uname` on macOS

Yields the macOS version instead of the kernel's version.

* Replace hard-coded path with the intended env variable to fix macOS 13

* Add `libclang-dev` as dependency to the Debian Dockerfile

* Set the `CODE` storage slot

* Update to a version of substrate without `wasmtimer`

Turns out `wasmtimer` is WASM only. This should restore the node's functioning
on non-WASM environments.

* Restore `clang` as a dependency due to the Debian Dockerfile as we require a C++ compiler

* Move from Debian bookworm to trixie

* Restore `chain_getBlockBin` to the RPC

* Always generate a new key for the P2P network

* Mention every account on-chain before they publish a transaction

`CheckNonce` required accounts to have a provider in order to even have their
nonce considered. This shims that by claiming every account has a provider at
the start of a block, if it signs a transaction.

The actual execution could presumably diverge between block building (which
sets the provider before each transaction) and execution (which sets the
providers at the start of the block). It doesn't diverge in our current
configuration and it won't be propagated to `next` (which doesn't use
`CheckNonce`).

Also uses explicit indexes for the `serai_abi::{Call, Event}` `enum`s.

* Adopt `patch-polkadot-sdk` with fixed peering

* Manually insert the authority discovery key into the keystore

I did try pulling in `pallet-authority-discovery` for this, updating
`SessionKeys`, but that was insufficient for whatever reason.

* Update to latest `substrate-wasm-builder`

* Fix timeline for incrementing providers

e1671dd71b incremented the providers for every
single transaction's sender before execution, noting the solution was fragile
but it worked for us at this time. It did not work for us at this time.

The new solution replaces `inc_providers` with direct access to the `Account`
`StorageMap` to increment the providers (a hedged sketch follows this commit),
achieving the desired goal _without_ emitting an event (which is ordered, and
the disparate order between building and execution was causing mismatches of
the state root).

This solution is also fragile and may also be insufficient. None of this code
exists anymore on `next` however. It just has to work sufficiently for now.

* clippy
2025-10-05 10:58:08 -04:00
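A hedged sketch of the provider bump described in "Fix timeline for incrementing providers" above, modeled on `frame_system`'s `Account` storage (the helper name is hypothetical, not the literal patch):

```rust
// Bump the sender's provider count by writing the `Account` StorageMap
// directly. Unlike `inc_providers`, this emits no event, so block building
// and block execution can't diverge on event ordering (and, in turn, on the
// state root).
fn mention_account<T: frame_system::Config>(who: &T::AccountId) {
  frame_system::Account::<T>::mutate(who, |account| {
    account.providers = account.providers.saturating_add(1);
  });
}
```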
Luke Parker
55ed33d2d1 Update to a version of Substrate which no longer cites our fork of substrate-bip39 2025-09-30 19:30:40 -04:00
Luke Parker
138a0e9b40 Resolve https://github.com/serai-dex/serai/issues/680 2025-09-30 01:05:17 -04:00
Luke Parker
4fc7263ac3 Make simple_request::Client generic to the executor
Part of https://github.com/serai-dex/serai/issues/682.

We don't remove the use of `tokio::sync::Mutex` now as `hyper` pulls in
`tokio::sync` anyways, so there's no point in replacing it. This doesn't yet
solve TLS for non-`tokio` `Client`s.
2025-09-30 01:05:12 -04:00
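A hedged usage sketch of the resulting executor-generic API (constructor names per the diffs further below; `Error` is assumed to be exported at the crate root and the `tokio`/`tls` features enabled):

```rust
use hyper_util::rt::tokio::TokioExecutor;
use simple_request::{Client, Error, TokioClient};

fn construct_clients() -> Result<(), Error> {
  // Convenience alias (`Client<TokioExecutor>`) behind the `tokio` feature
  let _pooled = TokioClient::with_connection_pool()?;
  // Any executor may be passed explicitly; this client is bound to a single host
  let _single = Client::with_executor_and_without_connection_pool(
    TokioExecutor::new(),
    "https://example.com",
  )?;
  Ok(())
}
```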
Luke Parker
f27fd59fa6 Update documentation within modular-frost
Resolves https://github.com/serai-dex/serai/issues/675.
2025-09-30 00:27:29 -04:00
Luke Parker
437f0e9a93 Remove serdect by removing the unnecessary "alloc" feature from crypto-bigint
This only works for the legacy `crypto-bigint` and downstream consumers who
don't have the modern `serdect` pulled in for independent reasons.
2025-09-26 20:58:45 -04:00
Luke Parker
cc5d38f1ce dkg-evrf Security Proofs (#681)
* Add audit statement for `dkg-evrf`

This doesn't cover the implementation, solely the academic work and background.

Also moves the existing audit of the `crypto` folder for organizational
reasons.

* Add files via upload
2025-09-26 11:20:48 -04:00
Luke Parker
0ce025e0c2 Update build-dependencies CI action 2025-09-21 15:40:58 -04:00
Luke Parker
224cf4ea21 Update monero-oxide to the branch with the new RPC
See https://github.com/monero-oxide/monero-oxide/pull/66.

Allows us to remove the shim `simple-request 0.1` we had to define as we now
have `simple-request 0.2` in tree.
2025-09-18 19:00:10 -04:00
Luke Parker
a9b1e5293c Support webpki-roots as a fallback in simple-request 2025-09-18 18:15:24 -04:00
Luke Parker
80009ab67f Tidy unused import 2025-09-18 17:49:37 -04:00
Luke Parker
df9fda2971 Fixes from errors in cherry-picked commits 2025-09-18 17:49:32 -04:00
Luke Parker
ca8afb83a1 simple-request 0.2.0 2025-09-18 17:41:31 -04:00
Luke Parker
18a9cf2535 Have simple-request return an error upon failing to find the system's root certificates 2025-09-18 17:41:31 -04:00
Luke Parker
10c126ad92 Misc updates 2025-09-18 17:41:25 -04:00
Luke Parker
19305aebc9 Finally make modular-frost work with alloc alone
Carries the update to `frost-schnorrkel` and `bitcoin-serai`.
2025-09-18 17:06:57 -04:00
Luke Parker
be68e27551 Tweak multiexp to compile on core
On `core`, it'll use a serial implementation with no benefit of its own; the
benefit is that the same code path uses the multi-scalar multiplication
algorithms when `alloc` _is_ enabled.

`schnorr-signatures` was previously tweaked to include a shim for
`SchnorrSignature::verify` which didn't use `multiexp_vartime` yet followed this
same premise. Now, instead of callers writing these shims, the shim lives within
`multiexp`.
2025-09-18 17:06:42 -04:00
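As a hedged illustration of that premise (exact bounds and feature flags assumed), a Schnorr-style check can be written once against `multiexp_vartime`; with `alloc` it picks up the multi-scalar multiplication algorithms, on bare `core` the same call now falls back to the serial sum:

```rust
use group::{
  ff::{Field, PrimeFieldBits},
  Group,
};
use multiexp::multiexp_vartime;
use zeroize::Zeroize;

// Checks R + cA - sG == identity, i.e. sG == R + cA, for a Schnorr-style
// signature (R, s) over challenge c and public key A.
fn schnorr_like_verify<G: Group>(
  generator: G,
  public_key: G,
  nonce: G,
  challenge: G::Scalar,
  response: G::Scalar,
) -> bool
where
  G::Scalar: PrimeFieldBits + Zeroize,
{
  multiexp_vartime(&[(G::Scalar::ONE, nonce), (challenge, public_key), (-response, generator)]) ==
    G::identity()
}
```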
Luke Parker
d6d96fe8ff Correct std-shims feature flagging 2025-09-18 17:06:31 -04:00
Luke Parker
95909d83a4 Expose std_shims::io on core
The `io::Write` trait is somewhat worthless, being implemented for nothing, yet
`Read` remains fully functional. This also allows using its polyfills _without_
requiring `alloc`.

Opportunity taken to make `schnorr-signatures` not require `alloc`.

This will require a version bump before being published due to newly requiring
the `alloc` feature be specified to maintain pre-existing behavior.

Enables resolving https://github.com/monero-oxide/monero-oxide/issues/48.
2025-09-18 17:06:05 -04:00
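A minimal sketch of what this enables, assuming the shimmed `Read` keeps `read_exact` as `std` does: deserialization in a crate with neither `std` nor `alloc`.

```rust
#![no_std]

use std_shims::io::{self, Read};

// Read a little-endian u32 with no allocator present.
fn read_u32(reader: &mut impl Read) -> io::Result<u32> {
  let mut buf = [0; 4];
  reader.read_exact(&mut buf)?;
  Ok(u32::from_le_bytes(buf))
}
```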
Luke Parker
3bd48974f3 Add missing alloc feature to multiexp's use of zeroize
Fixes building `multiexp` without default features, without separately
specifying `zeroize` and adding the `alloc` feature.
2025-09-18 17:05:19 -04:00
Luke Parker
29093715e3 Add impl<R: Read> Read for &mut R to std_shims
Increases parity with `std::io`.
2025-09-18 17:05:07 -04:00
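A small sketch of why the blanket impl matters, mirroring `std::io` ergonomics (helper names hypothetical):

```rust
use std_shims::io::{Read, Result};

fn read_byte(mut reader: impl Read) -> Result<u8> {
  let mut buf = [0];
  reader.read_exact(&mut buf)?;
  Ok(buf[0])
}

// Without `impl<R: Read> Read for &mut R`, `&mut reader` wouldn't satisfy
// `impl Read` here and the reader would have to be given away entirely.
fn read_two(mut reader: impl Read) -> Result<(u8, u8)> {
  Ok((read_byte(&mut reader)?, read_byte(&mut reader)?))
}
```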
Luke Parker
87b4dfc8f3 Expand std_shims::prelude to better match std::prelude 2025-09-18 17:04:54 -04:00
Luke Parker
4db78b1787 Add the ability to bound the response's size limit to simple-request 2025-09-18 17:04:41 -04:00
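A hedged usage sketch of the new limit (method name per the diff further below):

```rust
use simple_request::Request;

fn bound_response(mut request: Request) -> Request {
  // Cap the response at ~1 MiB; per the documented caveat, a single HTTP frame
  // may still exceed this.
  request.set_response_size_limit(Some(1 << 20));
  request
}
```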
Luke Parker
02a5f15535 Make the MSRV lint more robust
The prior version would fail if the last entry in the final array was not
originally the last entry.
2025-09-18 17:04:10 -04:00
Luke Parker
865e351f96 Bitcoin 29.1
Benefits from `v2transport`, `mempoolfullrbf`, and potentially TRUC.
2025-09-06 04:09:39 -04:00
Luke Parker
ea275df26c Re-export curve25519_dalek::RistrettoPoint for dalek_ff_group::RistrettoPoint
Sacrifices a `Hash` implementation (inefficient, and one which already shouldn't
be used) which we appear to have used in only two files (both since patched).
2025-09-05 17:40:44 -04:00
Luke Parker
2216ade8c4 Tweak how prime-field normalizes to the even square root 2025-09-04 20:48:15 -04:00
Luke Parker
5265cc69de hex-literal 1 2025-09-03 13:56:48 -04:00
Luke Parker
a141deaf36 Smash the singular Ciphersuite trait into multiple
This helps identify where the various functionalities are used, or rather, not
used. The `Ciphersuite` trait present in `patches/ciphersuite`, facilitating
the entire FCMP++ tree, only requires the markers _and_ canonical point
decoding. I've opened a PR to upstream such a trait into `group`
(https://github.com/zkcrypto/group/pull/68).

`WrappedGroup` is still justified for as long as `Group::generator` exists.
Moving `::generator()` to its own trait, on an independent structure (upstream)
would be massively appreciated. @tarcieri also wanted to update from
`fn generator()` to `const GENERATOR`, which would encourage further discussion
on https://github.com/zkcrypto/group/issues/32 and
https://github.com/zkcrypto/group/issues/45, which have been stagnant.

The `Id` trait is occasionally used yet really should be first off the chopping
block.

Finally, `WithPreferredHash` is only actually used around a third of the time,
which more than justifies it being a separate trait.

---

Updates `dalek_ff_group::Scalar` to directly re-export
`curve25519_dalek::Scalar`, as it works without issue. `dalek_ff_group::RistrettoPoint`
also could be replaced with an export of `curve25519_dalek::RistrettoPoint`,
yet the coordinator relies on how we implemented `Hash` on it for the hell of
it so it isn't worth it at this time. `dalek_ff_group::EdwardsPoint` can't be
replaced with a re-export of `curve25519_dalek::SubgroupPoint` as the latter
doesn't implement the `zeroize` and `subtle` traits within a released,
non-yanked version. Relevant to https://github.com/serai-dex/serai/issues/201 and
https://github.com/dalek-cryptography/curve25519-dalek/issues/811#issuecomment-3247732746.

Also updates the `Ristretto` ciphersuite to prefer `Blake2b-512` over
`SHA2-512`. In order to maintain compliance with FROST's IETF standard,
`modular-frost` defines its own ciphersuite for Ristretto which still uses
`SHA2-512`.
2025-09-03 13:50:20 -04:00
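A rough sketch of the shape of this split, using the trait and method names which show up in the diffs below (`WrappedGroup`, `GroupIo`, `WithPreferredHash`); the actual definitions and bounds in `patches/ciphersuite` are assumed:

```rust
use group::{ff::PrimeField, Group, GroupEncoding};

// Marker-level functionality: the scalar field, the group, and its generator.
pub trait WrappedGroup {
  type F: PrimeField;
  type G: Group<Scalar = Self::F> + GroupEncoding;
  fn generator() -> Self::G;
}

// Canonical point decoding, the only addition the FCMP++ tree requires.
pub trait GroupIo: WrappedGroup {
  #[allow(non_snake_case)]
  fn read_G<R: std_shims::io::Read>(reader: &mut R) -> std_shims::io::Result<Self::G>;
}

// A preferred hash function, only needed by roughly a third of consumers.
pub trait WithPreferredHash: WrappedGroup {
  type H: digest::Digest;
}
```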
Luke Parker
215e41fdb6 Remove deprecated APIs from dalek-ff-group
For backwards compatibility, we now supply it as a patch (as previously done
with `ciphersuite`).

Removes `crypto-bigint 0.5` from the tree and shapes up what the next release
will look like.
2025-09-03 07:05:50 -04:00
Luke Parker
41c34d7f11 Remove crypto-bigint from the public API of prime-field 2025-09-03 07:05:45 -04:00
Luke Parker
974bc82387 Remove unnecessary to_string for clone 2025-09-03 06:11:32 -04:00
Luke Parker
47ef24a7cc Remove unused patch for parking_lot_core 2025-09-03 06:11:32 -04:00
348 changed files with 6157 additions and 7130 deletions

View File

@@ -5,7 +5,7 @@ inputs:
version:
description: "Version to download and run"
required: false
default: "27.0"
default: "30.0"
runs:
using: "composite"

View File

@@ -7,6 +7,10 @@ runs:
- name: Remove unused packages
shell: bash
run: |
# Ensure the repositories are synced
sudo apt update -y
# Actually perform the removals
sudo apt remove -y "*powershell*" "*nuget*" "*bazel*" "*ansible*" "*terraform*" "*heroku*" "*aws*" azure-cli
sudo apt remove -y "*nodejs*" "*npm*" "*yarn*" "*java*" "*kotlin*" "*golang*" "*swift*" "*julia*" "*fortran*" "*android*"
sudo apt remove -y "*apache2*" "*nginx*" "*firefox*" "*chromium*" "*chrome*" "*edge*"
@@ -14,8 +18,9 @@ runs:
sudo apt remove -y --allow-remove-essential -f shim-signed *python3*
# This removal command requires the prior removals due to unmet dependencies otherwise
sudo apt remove -y "*qemu*" "*sql*" "*texinfo*" "*imagemagick*"
# Reinstall python3 as a general dependency of a functional operating system
sudo apt install python3
sudo apt install -y python3 --fix-missing
if: runner.os == 'Linux'
- name: Remove unused packages
@@ -33,19 +38,23 @@ runs:
shell: bash
run: |
if [ "$RUNNER_OS" == "Linux" ]; then
sudo apt install -y ca-certificates protobuf-compiler
sudo apt install -y ca-certificates protobuf-compiler libclang-dev
elif [ "$RUNNER_OS" == "Windows" ]; then
choco install protoc
elif [ "$RUNNER_OS" == "macOS" ]; then
brew install protobuf
brew install protobuf llvm
HOMEBREW_ROOT_PATH=/opt/homebrew # Apple Silicon
if [ $(uname -m) = "x86_64" ]; then HOMEBREW_ROOT_PATH=/usr/local; fi # Intel
ls $HOMEBREW_ROOT_PATH/opt/llvm/lib | grep "libclang.dylib" # Make sure this installed `libclang`
echo "DYLD_LIBRARY_PATH=$HOMEBREW_ROOT_PATH/opt/llvm/lib:$DYLD_LIBRARY_PATH" >> "$GITHUB_ENV"
fi
- name: Install solc
shell: bash
run: |
cargo +1.89 install svm-rs --version =0.5.18
svm install 0.8.26
svm use 0.8.26
cargo +1.91 install svm-rs --version =0.5.19
svm install 0.8.29
svm use 0.8.29
- name: Remove preinstalled Docker
shell: bash

View File

@@ -5,7 +5,7 @@ inputs:
version:
description: "Version to download and run"
required: false
default: v0.18.3.4
default: v0.18.4.3
runs:
using: "composite"

View File

@@ -5,7 +5,7 @@ inputs:
version:
description: "Version to download and run"
required: false
default: v0.18.3.4
default: v0.18.4.3
runs:
using: "composite"

View File

@@ -5,12 +5,12 @@ inputs:
monero-version:
description: "Monero version to download and run as a regtest node"
required: false
default: v0.18.3.4
default: v0.18.4.3
bitcoin-version:
description: "Bitcoin version to download and run as a regtest node"
required: false
default: "27.1"
default: "30.0"
runs:
using: "composite"

View File

@@ -1 +1 @@
nightly-2025-09-01
nightly-2025-11-11

View File

@@ -18,7 +18,7 @@ jobs:
key: rust-advisory-db
- name: Install cargo deny
run: cargo +1.89 install cargo-deny --version =0.18.3
run: cargo +1.91 install cargo-deny --version =0.18.5
- name: Run cargo deny
run: cargo deny -L error --all-features check --hide-inclusion-graph

View File

@@ -11,7 +11,7 @@ jobs:
clippy:
strategy:
matrix:
os: [ubuntu-latest, macos-13, macos-14, windows-latest]
os: [ubuntu-latest, macos-15-intel, macos-latest, windows-latest]
runs-on: ${{ matrix.os }}
steps:
@@ -26,7 +26,7 @@ jobs:
uses: ./.github/actions/build-dependencies
- name: Install nightly rust
run: rustup toolchain install ${{ steps.nightly.outputs.version }} --profile minimal -t wasm32v1-none -c rust-src -c clippy
run: rustup toolchain install ${{ steps.nightly.outputs.version }} --profile minimal -t wasm32v1-none -c clippy
- name: Run Clippy
run: cargo +${{ steps.nightly.outputs.version }} clippy --all-features --all-targets -- -D warnings -A clippy::items_after_test_module
@@ -52,7 +52,7 @@ jobs:
key: rust-advisory-db
- name: Install cargo deny
run: cargo +1.89 install cargo-deny --version =0.18.3
run: cargo +1.91 install cargo-deny --version =0.18.5
- name: Run cargo deny
run: cargo deny -L error --all-features check --hide-inclusion-graph
@@ -88,8 +88,8 @@ jobs:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Verify all dependencies are in use
run: |
cargo +1.89 install cargo-machete --version =0.8.0
cargo +1.89 machete
cargo +1.91 install cargo-machete --version =0.9.1
cargo +1.91 machete
msrv:
runs-on: ubuntu-latest
@@ -98,7 +98,7 @@ jobs:
- name: Verify claimed `rust-version`
shell: bash
run: |
cargo +1.89 install cargo-msrv --version =0.18.4
cargo +1.91 install cargo-msrv --version =0.18.4
function check_msrv {
# We `cd` into the directory passed as the first argument, but will return to the
@@ -144,18 +144,17 @@ jobs:
function check_workspace {
# Get the members array from the workspace's `Cargo.toml`
cargo_toml_lines=$(cat ./Cargo.toml | wc -l)
# Keep all lines after the start of the array, then keep all lines before the next "]"
members=$(cat Cargo.toml | grep "members\ \=\ \[" -m1 -A$cargo_toml_lines | grep "]" -m1 -B$cargo_toml_lines)
# Parse out any comments, including comments post-fixed on the same line as an entry
members=$(echo "$members" | grep -Ev "^[[:space:]]+#" | grep -Ev "^[[:space:]]?$" | awk -F',' '{print $1","}')
# Prune `members = [` to `[` by replacing the first line with just `[`
# Parse out any comments, whitespace, including comments post-fixed on the same line as an entry
# We accomplish the latter by pruning all characters after the entry's ","
members=$(echo "$members" | grep -Ev "^[[:space:]]*(#|$)" | awk -F',' '{print $1","}')
# Replace the first line, which was "members = [" and is now "members = [,", with "["
members=$(echo "$members" | sed "1s/.*/\[/")
# Remove the trailing comma by replacing the last line's "," with ""
members=$(echo "$members" | sed "$(($(echo "$members" | wc -l) - 1))s/\,//")
# Correct the last line, which was malleated to "]," when pruning comments
# Correct the last line, which was malleated to "],"
members=$(echo "$members" | sed "$(echo "$members" | wc -l)s/\]\,/\]/")
# Don't check the patches
members=$(echo "$members" | grep -v "patches")
# Don't check the following
# Most of these are binaries, with the exception of the Substrate runtime which has a
# bespoke build pipeline
@@ -174,6 +173,9 @@ jobs:
members=$(echo "$members" | grep -v "mini\"")
members=$(echo "$members" | grep -v "tests/")
# Remove the trailing comma by replacing the last line's "," with ""
members=$(echo "$members" | sed "$(($(echo "$members" | wc -l) - 1))s/\,//")
echo $members | jq -r ".[]" | while read -r member; do
check_msrv $member
correct=$?
@@ -188,12 +190,12 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Slither
run: |
python3 -m pip install solc-select
solc-select install 0.8.26
solc-select use 0.8.26
python3 -m pip install slither-analyzer
slither --include-paths ./networks/ethereum/schnorr/contracts/Schnorr.sol

View File

@@ -70,7 +70,7 @@ jobs:
- name: Build Rust docs
run: |
rustup toolchain install ${{ steps.nightly.outputs.version }} --profile minimal -t wasm32v1-none -c rust-docs
RUSTDOCFLAGS="--cfg docsrs" cargo +${{ steps.nightly.outputs.version }} doc --workspace --all-features
RUSTDOCFLAGS="--cfg docsrs" cargo +${{ steps.nightly.outputs.version }} doc --workspace --no-deps --all-features
mv target/doc docs/_site/rust
- name: Upload artifact

7
.gitignore vendored
View File

@@ -1,7 +1,14 @@
target
# Don't commit any `Cargo.lock` which aren't the workspace's
Cargo.lock
!./Cargo.lock
# Don't commit any `Dockerfile`, as they're auto-generated, except the only one which isn't
Dockerfile
Dockerfile.fast-epoch
!orchestration/runtime/Dockerfile
.test-logs
.vscode

6435
Cargo.lock generated

File diff suppressed because it is too large.

View File

@@ -1,25 +1,6 @@
[workspace]
resolver = "2"
members = [
# Version patches
"patches/parking_lot_core",
"patches/parking_lot",
"patches/zstd",
"patches/rocksdb",
# std patches
"patches/matches",
"patches/is-terminal",
# Rewrites/redirects
"patches/option-ext",
"patches/directories-next",
# monero-oxide expects ciphersuite, yet the ciphersuite in-tree here has breaking changes
# This re-exports the in-tree ciphersuite _without_ changes breaking to monero-oxide
# Not included in workspace to prevent having two crates with the same name (an error)
# "patches/ciphersuite",
"common/std-shims",
"common/zalloc",
"common/patchable-async-sleep",
@@ -191,31 +172,32 @@ panic = "unwind"
overflow-checks = true
[patch.crates-io]
# Point to empty crates for unused crates in our tree
ark-ff-3 = { package = "ark-ff", path = "patches/ethereum/ark-ff-0.3" }
ark-ff-4 = { package = "ark-ff", path = "patches/ethereum/ark-ff-0.4" }
c-kzg = { path = "patches/ethereum/c-kzg" }
secp256k1-30 = { package = "secp256k1", path = "patches/ethereum/secp256k1-30" }
# Dependencies from monero-oxide which originate from within our own tree
std-shims = { path = "common/std-shims" }
simple-request = { path = "common/request" }
std-shims = { path = "patches/std-shims" }
simple-request = { path = "patches/simple-request" }
multiexp = { path = "crypto/multiexp" }
flexible-transcript = { path = "crypto/transcript" }
ciphersuite = { path = "patches/ciphersuite" }
dalek-ff-group = { path = "crypto/dalek-ff-group" }
dalek-ff-group = { path = "patches/dalek-ff-group" }
minimal-ed448 = { path = "crypto/ed448" }
modular-frost = { path = "crypto/frost" }
# This has a non-deprecated `std` alternative since Rust's 2024 edition
home = { path = "patches/home" }
# Updates to the latest version
darling = { path = "patches/darling" }
thiserror = { path = "patches/thiserror" }
# https://github.com/rust-lang-nursery/lazy-static.rs/issues/201
lazy_static = { git = "https://github.com/rust-lang-nursery/lazy-static.rs", rev = "5735630d46572f1e5377c8f2ba0f79d18f53b10c" }
parking_lot_core = { path = "patches/parking_lot_core" }
parking_lot = { path = "patches/parking_lot" }
# wasmtime pulls in an old version for this
zstd = { path = "patches/zstd" }
# Needed for WAL compression
rocksdb = { path = "patches/rocksdb" }
# is-terminal now has an std-based solution with an equivalent API
is-terminal = { path = "patches/is-terminal" }
# So does matches
matches = { path = "patches/matches" }
# directories-next was created because directories was unmaintained
# directories-next is now unmaintained while directories is maintained
# The directories author pulls in ridiculously pointless crates and prefers
@@ -224,12 +206,15 @@ matches = { path = "patches/matches" }
option-ext = { path = "patches/option-ext" }
directories-next = { path = "patches/directories-next" }
# Patch to include `FromUniformBytes<64>` over Scalar
# Patch from a fork back to upstream
parity-bip39 = { path = "patches/parity-bip39" }
# Patch to include `FromUniformBytes<64>` over `Scalar`
k256 = { git = "https://github.com/kayabaNerve/elliptic-curves", rev = "4994c9ab163781a88cd4a49beae812a89a44e8c3" }
p256 = { git = "https://github.com/kayabaNerve/elliptic-curves", rev = "4994c9ab163781a88cd4a49beae812a89a44e8c3" }
# https://github.com/RustCrypto/hybrid-array/issues/131
hybrid-array = { git = "https://github.com/kayabaNerve/hybrid-array", rev = "8caa508976c93696a67f40734537c91be7cecd96" }
# `jemalloc` conflicts with `mimalloc`, so patch to a `rocksdb` which never uses `jemalloc`
librocksdb-sys = { path = "patches/librocksdb-sys" }
[workspace.lints.clippy]
unwrap_or_default = "allow"
@@ -275,7 +260,7 @@ redundant_closure_for_method_calls = "deny"
redundant_else = "deny"
string_add_assign = "deny"
string_slice = "deny"
unchecked_duration_subtraction = "deny"
unchecked_time_subtraction = "deny"
uninlined_format_args = "deny"
unnecessary_box_returns = "deny"
unnecessary_join = "deny"
@@ -284,3 +269,6 @@ unnested_or_patterns = "deny"
unused_async = "deny"
unused_self = "deny"
zero_sized_map_values = "deny"
[workspace.lints.rust]
unused = "allow" # TODO: https://github.com/rust-lang/rust/issues/147648

View File

@@ -0,0 +1,50 @@
# eVRF DKG
In 2024, the [eVRF paper](https://eprint.iacr.org/2024/397) was published to
the IACR preprint server. Within it was a one-round unbiased DKG and a
one-round unbiased threshold DKG. Unfortunately, both simply describe
communication of the secret shares as 'Alice sends $s_b$ to Bob'. This causes,
in practice, the need for an additional round of communication to occur where
all participants confirm they received their secret shares.
Within Serai, it was posited to use the same premises as the DDH eVRF itself to
achieve a verifiable encryption scheme. This allows the secret shares to be
posted to any 'bulletin board' (such as a blockchain) and for all observers to
confirm:
- A participant participated
- The secret shares sent can be received by the intended recipient so long as
they can access the bulletin board
Additionally, Serai desired a robust scheme (albeit with a biased key as the
output, which is fine for our purposes). Accordingly, our implementation
instantiates the threshold eVRF DKG from the eVRF paper, with our own proposal
for verifiable encryption, with the caller allowed to decide the set of
participants. They may:
- Select everyone, collapsing to the non-threshold unbiased DKG from the eVRF
paper
- Select a pre-determined set, collapsing to the threshold unbiased DKG from
the eVRF paper
- Select a post-determined set (with any solution for the Common Subset
  problem), achieving a robust threshold biased DKG
Note that the eVRF paper proposes using the eVRF to sample coefficients yet
this is unnecessary when the resulting key will be biased. Any proof of
knowledge for the coefficients, as necessary for their extraction within the
security proofs, would be sufficient.
MAGIC Grants contracted HashCloak to formalize Serai's proposal for a DKG and
provide proofs for its security. This resulted in
[this paper](<./Security Proofs.pdf>).
Our implementation itself is then built on top of the audited
[`generalized-bulletproofs`](https://github.com/kayabaNerve/monero-oxide/tree/generalized-bulletproofs/audits/crypto/generalized-bulletproofs)
and
[`generalized-bulletproofs-ec-gadgets`](https://github.com/monero-oxide/monero-oxide/tree/fcmp%2B%2B/audits/fcmps).
Note we do not use the originally premised DDH eVRF but the one premised on
elliptic curve divisors, the methodology of which is commented on
[here](https://github.com/monero-oxide/monero-oxide/tree/fcmp%2B%2B/audits/divisors).
Our implementation itself is unaudited at this time however.

Binary file not shown.

View File

@@ -17,7 +17,7 @@ rustdoc-args = ["--cfg", "docsrs"]
workspace = true
[dependencies]
parity-db = { version = "0.4", default-features = false, optional = true }
parity-db = { version = "0.5", default-features = false, optional = true }
rocksdb = { version = "0.24", default-features = false, features = ["zstd"], optional = true }
[features]

View File

@@ -1,5 +1,5 @@
#![cfg_attr(docsrs, feature(doc_cfg))]
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
#![cfg_attr(docsrs, feature(doc_cfg))]
// Obtain a variable from the Serai environment/secret store.
pub fn var(variable: &str) -> Option<String> {

View File

@@ -1,4 +1,4 @@
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
#![cfg_attr(docsrs, feature(doc_cfg))]
#![doc = include_str!("../README.md")]
#![deny(missing_docs)]

View File

@@ -1,9 +1,9 @@
[package]
name = "simple-request"
version = "0.1.0"
version = "0.3.0"
description = "A simple HTTP(S) request library"
license = "MIT"
repository = "https://github.com/serai-dex/serai/tree/develop/common/simple-request"
repository = "https://github.com/serai-dex/serai/tree/develop/common/request"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = ["http", "https", "async", "request", "ssl"]
edition = "2021"
@@ -19,9 +19,10 @@ workspace = true
[dependencies]
tower-service = { version = "0.3", default-features = false }
hyper = { version = "1", default-features = false, features = ["http1", "client"] }
hyper-util = { version = "0.1", default-features = false, features = ["http1", "client-legacy", "tokio"] }
hyper-util = { version = "0.1", default-features = false, features = ["http1", "client-legacy"] }
http-body-util = { version = "0.1", default-features = false }
tokio = { version = "1", default-features = false }
futures-util = { version = "0.3", default-features = false, features = ["std"] }
tokio = { version = "1", default-features = false, features = ["sync"] }
hyper-rustls = { version = "0.27", default-features = false, features = ["http1", "ring", "rustls-native-certs", "native-tokio"], optional = true }
@@ -29,6 +30,8 @@ zeroize = { version = "1", optional = true }
base64ct = { version = "1", features = ["alloc"], optional = true }
[features]
tls = ["hyper-rustls"]
tokio = ["hyper-util/tokio"]
tls = ["tokio", "hyper-rustls"]
webpki-roots = ["tls", "hyper-rustls/webpki-roots"]
basic-auth = ["zeroize", "base64ct"]
default = ["tls"]

View File

@@ -1,19 +1,20 @@
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
#![cfg_attr(docsrs, feature(doc_cfg))]
#![doc = include_str!("../README.md")]
use core::{pin::Pin, future::Future};
use std::sync::Arc;
use tokio::sync::Mutex;
use futures_util::FutureExt;
use ::tokio::sync::Mutex;
use tower_service::Service as TowerService;
use hyper::{Uri, header::HeaderValue, body::Bytes, client::conn::http1::SendRequest, rt::Executor};
pub use hyper;
use hyper_util::client::legacy::{Client as HyperClient, connect::HttpConnector};
#[cfg(feature = "tls")]
use hyper_rustls::{HttpsConnectorBuilder, HttpsConnector};
use hyper::{Uri, header::HeaderValue, body::Bytes, client::conn::http1::SendRequest};
use hyper_util::{
rt::tokio::TokioExecutor,
client::legacy::{Client as HyperClient, connect::HttpConnector},
};
pub use hyper;
mod request;
pub use request::*;
@@ -37,52 +38,86 @@ type Connector = HttpConnector;
type Connector = HttpsConnector<HttpConnector>;
#[derive(Clone, Debug)]
enum Connection {
enum Connection<
E: 'static + Send + Sync + Clone + Executor<Pin<Box<dyn Send + Future<Output = ()>>>>,
> {
ConnectionPool(HyperClient<Connector, Full<Bytes>>),
Connection {
executor: E,
connector: Connector,
host: Uri,
connection: Arc<Mutex<Option<SendRequest<Full<Bytes>>>>>,
},
}
/// An HTTP client.
///
/// `tls` is only guaranteed to work when using the `tokio` executor. Instantiating a client when
/// the `tls` feature is active without using the `tokio` executor will cause errors.
#[derive(Clone, Debug)]
pub struct Client {
connection: Connection,
pub struct Client<
E: 'static + Send + Sync + Clone + Executor<Pin<Box<dyn Send + Future<Output = ()>>>>,
> {
connection: Connection<E>,
}
impl Client {
fn connector() -> Connector {
impl<E: 'static + Send + Sync + Clone + Executor<Pin<Box<dyn Send + Future<Output = ()>>>>>
Client<E>
{
#[allow(clippy::unnecessary_wraps)]
fn connector() -> Result<Connector, Error> {
let mut res = HttpConnector::new();
res.set_keepalive(Some(core::time::Duration::from_secs(60)));
res.set_nodelay(true);
res.set_reuse_address(true);
#[cfg(feature = "tls")]
if core::any::TypeId::of::<E>() !=
core::any::TypeId::of::<hyper_util::rt::tokio::TokioExecutor>()
{
Err(Error::ConnectionError(
"`tls` feature enabled but not using the `tokio` executor".into(),
))?;
}
#[cfg(feature = "tls")]
res.enforce_http(false);
#[cfg(feature = "tls")]
let res = HttpsConnectorBuilder::new()
.with_native_roots()
.expect("couldn't fetch system's SSL roots")
.https_or_http()
.enable_http1()
.wrap_connector(res);
res
let https = HttpsConnectorBuilder::new().with_native_roots();
#[cfg(all(feature = "tls", not(feature = "webpki-roots")))]
let https = https.map_err(|e| {
Error::ConnectionError(
format!("couldn't load system's SSL root certificates and webpki-roots unavilable: {e:?}")
.into(),
)
})?;
// Fallback to `webpki-roots` if present
#[cfg(all(feature = "tls", feature = "webpki-roots"))]
let https = https.unwrap_or(HttpsConnectorBuilder::new().with_webpki_roots());
#[cfg(feature = "tls")]
let res = https.https_or_http().enable_http1().wrap_connector(res);
Ok(res)
}
pub fn with_connection_pool() -> Client {
Client {
pub fn with_executor_and_connection_pool(executor: E) -> Result<Client<E>, Error> {
Ok(Client {
connection: Connection::ConnectionPool(
HyperClient::builder(TokioExecutor::new())
HyperClient::builder(executor)
.pool_idle_timeout(core::time::Duration::from_secs(60))
.build(Self::connector()),
.build(Self::connector()?),
),
}
})
}
pub fn without_connection_pool(host: &str) -> Result<Client, Error> {
pub fn with_executor_and_without_connection_pool(
executor: E,
host: &str,
) -> Result<Client<E>, Error> {
Ok(Client {
connection: Connection::Connection {
connector: Self::connector(),
executor,
connector: Self::connector()?,
host: {
let uri: Uri = host.parse().map_err(|_| Error::InvalidUri)?;
if uri.host().is_none() {
@@ -95,9 +130,9 @@ impl Client {
})
}
pub async fn request<R: Into<Request>>(&self, request: R) -> Result<Response<'_>, Error> {
pub async fn request<R: Into<Request>>(&self, request: R) -> Result<Response<'_, E>, Error> {
let request: Request = request.into();
let mut request = request.0;
let Request { mut request, response_size_limit } = request;
if let Some(header_host) = request.headers().get(hyper::header::HOST) {
match &self.connection {
Connection::ConnectionPool(_) => {}
@@ -131,7 +166,7 @@ impl Client {
Connection::ConnectionPool(client) => {
client.request(request).await.map_err(Error::HyperUtil)?
}
Connection::Connection { connector, host, connection } => {
Connection::Connection { executor, connector, host, connection } => {
let mut connection_lock = connection.lock().await;
// If there's not a connection...
@@ -143,28 +178,46 @@ impl Client {
let call_res = call_res.map_err(Error::ConnectionError);
let (requester, connection) =
hyper::client::conn::http1::handshake(call_res?).await.map_err(Error::Hyper)?;
// This will die when we drop the requester, so we don't need to track an AbortHandle
// for it
tokio::spawn(connection);
// This task will die when we drop the requester
executor.execute(Box::pin(connection.map(|_| ())));
*connection_lock = Some(requester);
}
let connection = connection_lock.as_mut().unwrap();
let connection = connection_lock.as_mut().expect("lock over the connection was poisoned");
let mut err = connection.ready().await.err();
if err.is_none() {
// Send the request
let res = connection.send_request(request).await;
if let Ok(res) = res {
return Ok(Response(res, self));
let response = connection.send_request(request).await;
if let Ok(response) = response {
return Ok(Response { response, size_limit: response_size_limit, client: self });
}
err = res.err();
err = response.err();
}
// Since this connection has been put into an error state, drop it
*connection_lock = None;
Err(Error::Hyper(err.unwrap()))?
Err(Error::Hyper(err.expect("only here if `err` is some yet no error")))?
}
};
Ok(Response(response, self))
Ok(Response { response, size_limit: response_size_limit, client: self })
}
}
#[cfg(feature = "tokio")]
mod tokio {
use hyper_util::rt::tokio::TokioExecutor;
use super::*;
pub type TokioClient = Client<TokioExecutor>;
impl Client<TokioExecutor> {
pub fn with_connection_pool() -> Result<Self, Error> {
Self::with_executor_and_connection_pool(TokioExecutor::new())
}
pub fn without_connection_pool(host: &str) -> Result<Self, Error> {
Self::with_executor_and_without_connection_pool(TokioExecutor::new(), host)
}
}
}
#[cfg(feature = "tokio")]
pub use tokio::TokioClient;

View File

@@ -7,11 +7,15 @@ pub use http_body_util::Full;
use crate::Error;
#[derive(Debug)]
pub struct Request(pub(crate) hyper::Request<Full<Bytes>>);
pub struct Request {
pub(crate) request: hyper::Request<Full<Bytes>>,
pub(crate) response_size_limit: Option<usize>,
}
impl Request {
#[cfg(feature = "basic-auth")]
fn username_password_from_uri(&self) -> Result<(String, String), Error> {
if let Some(authority) = self.0.uri().authority() {
if let Some(authority) = self.request.uri().authority() {
let authority = authority.as_str();
if authority.contains('@') {
// Decode the username and password from the URI
@@ -36,9 +40,10 @@ impl Request {
let mut formatted = format!("{username}:{password}");
let mut encoded = Base64::encode_string(formatted.as_bytes());
formatted.zeroize();
self.0.headers_mut().insert(
self.request.headers_mut().insert(
hyper::header::AUTHORIZATION,
HeaderValue::from_str(&format!("Basic {encoded}")).unwrap(),
HeaderValue::from_str(&format!("Basic {encoded}"))
.expect("couldn't form header from base64-encoded string"),
);
encoded.zeroize();
}
@@ -59,9 +64,17 @@ impl Request {
pub fn with_basic_auth(&mut self) {
let _ = self.basic_auth_from_uri();
}
}
impl From<hyper::Request<Full<Bytes>>> for Request {
fn from(request: hyper::Request<Full<Bytes>>) -> Request {
Request(request)
/// Set a size limit for the response.
///
/// This may be exceeded by a single HTTP frame and accordingly isn't perfect.
pub fn set_response_size_limit(&mut self, response_size_limit: Option<usize>) {
self.response_size_limit = response_size_limit;
}
}
impl From<hyper::Request<Full<Bytes>>> for Request {
fn from(request: hyper::Request<Full<Bytes>>) -> Request {
Request { request, response_size_limit: None }
}
}

View File

@@ -1,24 +1,54 @@
use core::{pin::Pin, future::Future};
use std::io;
use hyper::{
StatusCode,
header::{HeaderValue, HeaderMap},
body::{Buf, Incoming},
body::Incoming,
rt::Executor,
};
use http_body_util::BodyExt;
use futures_util::{Stream, StreamExt};
use crate::{Client, Error};
// Borrows the client so its async task lives as long as this response exists.
#[allow(dead_code)]
#[derive(Debug)]
pub struct Response<'a>(pub(crate) hyper::Response<Incoming>, pub(crate) &'a Client);
impl Response<'_> {
pub struct Response<
'a,
E: 'static + Send + Sync + Clone + Executor<Pin<Box<dyn Send + Future<Output = ()>>>>,
> {
pub(crate) response: hyper::Response<Incoming>,
pub(crate) size_limit: Option<usize>,
pub(crate) client: &'a Client<E>,
}
impl<E: 'static + Send + Sync + Clone + Executor<Pin<Box<dyn Send + Future<Output = ()>>>>>
Response<'_, E>
{
pub fn status(&self) -> StatusCode {
self.0.status()
self.response.status()
}
pub fn headers(&self) -> &HeaderMap<HeaderValue> {
self.0.headers()
self.response.headers()
}
pub async fn body(self) -> Result<impl std::io::Read, Error> {
Ok(self.0.into_body().collect().await.map_err(Error::Hyper)?.aggregate().reader())
let mut body = self.response.into_body().into_data_stream();
let mut res: Vec<u8> = vec![];
loop {
if let Some(size_limit) = self.size_limit {
let (lower, upper) = body.size_hint();
if res.len().wrapping_add(upper.unwrap_or(lower)) > size_limit.min(usize::MAX - 1) {
Err(Error::ConnectionError("response exceeded size limit".into()))?;
}
}
let Some(part) = body.next().await else { break };
let part = part.map_err(Error::Hyper)?;
res.extend(part.as_ref());
}
Ok(io::Cursor::new(res))
}
}

View File

@@ -1,6 +1,6 @@
[package]
name = "std-shims"
version = "0.1.4"
version = "0.1.5"
description = "A series of std shims to make alloc more feasible"
license = "MIT"
repository = "https://github.com/serai-dex/serai/tree/develop/common/std-shims"
@@ -18,9 +18,10 @@ workspace = true
[dependencies]
rustversion = { version = "1", default-features = false }
spin = { version = "0.10", default-features = false, features = ["use_ticket_mutex", "once", "lazy"] }
hashbrown = { version = "0.15", default-features = false, features = ["default-hasher", "inline-more"] }
spin = { version = "0.10", default-features = false, features = ["use_ticket_mutex", "fair_mutex", "once", "lazy"] }
hashbrown = { version = "0.16", default-features = false, features = ["default-hasher", "inline-more"], optional = true }
[features]
std = []
alloc = ["hashbrown"]
std = ["alloc", "spin/std"]
default = ["std"]

View File

@@ -1,11 +1,28 @@
# std shims
# `std` shims
A crate which passes through to std when the default `std` feature is enabled,
yet provides a series of shims when it isn't.
`std-shims` is a Rust crate with two purposes:
- Expand the functionality of `core` and `alloc`
- Polyfill functionality only available on newer versions of Rust
No guarantee of one-to-one parity is provided. The shims provided aim to be sufficient for the
average case.
The goal is to make supporting no-`std` environments, and older versions of
Rust, as simple as possible. For most use cases, replacing `std::` with
`std_shims::` and adding `use std_shims::prelude::*` is sufficient to take full
advantage of `std-shims`.
`HashSet` and `HashMap` are provided via `hashbrown`. Synchronization primitives are provided via
`spin` (avoiding a requirement on `critical-section`).
types are not guaranteed to be
# API Surface
`std-shims` only aims to have items _mutually available_ between `alloc` (with
extra dependencies) and `std` publicly exposed. Items exclusive to `std`, with
no shims available, will not be exported by `std-shims`.
# Dependencies
`HashSet` and `HashMap` are provided via `hashbrown`. Synchronization
primitives are provided via `spin` (avoiding a requirement on
`critical-section`). Sections of `std::io` are independently matched as
possible. `rustversion` is used to detect when to provide polyfills.
# Disclaimer
No guarantee of one-to-one parity is provided. The shims provided aim to be
sufficient for the average case. Pull requests are _welcome_.

View File

@@ -1,7 +1,7 @@
#[cfg(all(feature = "alloc", not(feature = "std")))]
pub use extern_alloc::collections::*;
#[cfg(all(feature = "alloc", not(feature = "std")))]
pub use hashbrown::{HashSet, HashMap};
#[cfg(feature = "std")]
pub use std::collections::*;
#[cfg(not(feature = "std"))]
pub use alloc::collections::*;
#[cfg(not(feature = "std"))]
pub use hashbrown::{HashSet, HashMap};

View File

@@ -1,42 +1,74 @@
#[cfg(feature = "std")]
pub use std::io::*;
#[cfg(not(feature = "std"))]
mod shims {
use core::fmt::{Debug, Formatter};
use alloc::{boxed::Box, vec::Vec};
use core::fmt::{self, Debug, Display, Formatter};
#[cfg(feature = "alloc")]
use extern_alloc::{boxed::Box, vec::Vec};
use crate::error::Error as CoreError;
/// The kind of error.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
pub enum ErrorKind {
UnexpectedEof,
Other,
}
/// An error.
#[derive(Debug)]
pub struct Error {
kind: ErrorKind,
error: Box<dyn Send + Sync>,
#[cfg(feature = "alloc")]
error: Box<dyn Send + Sync + CoreError>,
}
impl Debug for Error {
fn fmt(&self, fmt: &mut Formatter<'_>) -> core::result::Result<(), core::fmt::Error> {
fmt.debug_struct("Error").field("kind", &self.kind).finish_non_exhaustive()
impl Display for Error {
fn fmt(&self, f: &mut Formatter<'_>) -> fmt::Result {
<Self as Debug>::fmt(self, f)
}
}
impl CoreError for Error {}
#[cfg(not(feature = "alloc"))]
pub trait IntoBoxSendSyncError {}
#[cfg(not(feature = "alloc"))]
impl<I> IntoBoxSendSyncError for I {}
#[cfg(feature = "alloc")]
pub trait IntoBoxSendSyncError: Into<Box<dyn Send + Sync + CoreError>> {}
#[cfg(feature = "alloc")]
impl<I: Into<Box<dyn Send + Sync + CoreError>>> IntoBoxSendSyncError for I {}
impl Error {
pub fn new<E: 'static + Send + Sync>(kind: ErrorKind, error: E) -> Error {
Error { kind, error: Box::new(error) }
/// Create a new error.
///
/// The error object itself is silently dropped when `alloc` is not enabled.
#[allow(unused)]
pub fn new<E: 'static + IntoBoxSendSyncError>(kind: ErrorKind, error: E) -> Error {
#[cfg(not(feature = "alloc"))]
let res = Error { kind };
#[cfg(feature = "alloc")]
let res = Error { kind, error: error.into() };
res
}
pub fn other<E: 'static + Send + Sync>(error: E) -> Error {
Error { kind: ErrorKind::Other, error: Box::new(error) }
/// Create a new error with `io::ErrorKind::Other` as its kind.
///
/// The error object itself is silently dropped when `alloc` is not enabled.
#[allow(unused)]
pub fn other<E: 'static + IntoBoxSendSyncError>(error: E) -> Error {
#[cfg(not(feature = "alloc"))]
let res = Error { kind: ErrorKind::Other };
#[cfg(feature = "alloc")]
let res = Error { kind: ErrorKind::Other, error: error.into() };
res
}
/// The kind of error.
pub fn kind(&self) -> ErrorKind {
self.kind
}
pub fn into_inner(self) -> Option<Box<dyn Send + Sync>> {
/// Retrieve the inner error.
#[cfg(feature = "alloc")]
pub fn into_inner(self) -> Option<Box<dyn Send + Sync + CoreError>> {
Some(self.error)
}
}
@@ -64,6 +96,12 @@ mod shims {
}
}
impl<R: Read> Read for &mut R {
fn read(&mut self, buf: &mut [u8]) -> Result<usize> {
R::read(*self, buf)
}
}
pub trait BufRead: Read {
fn fill_buf(&mut self) -> Result<&[u8]>;
fn consume(&mut self, amt: usize);
@@ -88,6 +126,7 @@ mod shims {
}
}
#[cfg(feature = "alloc")]
impl Write for Vec<u8> {
fn write(&mut self, buf: &[u8]) -> Result<usize> {
self.extend(buf);
@@ -95,6 +134,8 @@ mod shims {
}
}
}
#[cfg(not(feature = "std"))]
pub use shims::*;
#[cfg(feature = "std")]
pub use std::io::{ErrorKind, Error, Result, Read, BufRead, Write};

View File

@@ -1,18 +1,45 @@
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
#![cfg_attr(docsrs, feature(doc_cfg))]
#![doc = include_str!("../README.md")]
#![cfg_attr(not(feature = "std"), no_std)]
pub extern crate alloc;
#[cfg(not(feature = "alloc"))]
pub use core::*;
#[cfg(not(feature = "alloc"))]
pub use core::{alloc, borrow, ffi, fmt, slice, str, task};
#[cfg(not(feature = "std"))]
#[rustversion::before(1.81)]
pub mod error {
use core::fmt::{Debug, Display};
pub trait Error: Debug + Display {}
}
#[cfg(not(feature = "std"))]
#[rustversion::since(1.81)]
pub use core::error;
#[cfg(feature = "alloc")]
extern crate alloc as extern_alloc;
#[cfg(all(feature = "alloc", not(feature = "std")))]
pub use extern_alloc::{alloc, borrow, boxed, ffi, fmt, rc, slice, str, string, task, vec, format};
#[cfg(feature = "std")]
pub use std::{alloc, borrow, boxed, error, ffi, fmt, rc, slice, str, string, task, vec, format};
pub mod sync;
pub mod collections;
pub mod io;
pub use alloc::vec;
pub use alloc::str;
pub use alloc::string;
pub mod sync;
pub mod prelude {
// Shim the `std` prelude
#[cfg(feature = "alloc")]
pub use extern_alloc::{
format, vec,
borrow::ToOwned,
boxed::Box,
vec::Vec,
string::{String, ToString},
};
// Shim `div_ceil`
#[rustversion::before(1.73)]
#[doc(hidden)]
pub trait StdShimsDivCeil {
@@ -53,6 +80,7 @@ pub mod prelude {
}
}
// Shim `io::Error::other`
#[cfg(feature = "std")]
#[rustversion::before(1.74)]
#[doc(hidden)]

View File

@@ -1,19 +1,28 @@
pub use core::sync::*;
pub use alloc::sync::*;
pub use core::sync::atomic;
#[cfg(all(feature = "alloc", not(feature = "std")))]
pub use extern_alloc::sync::{Arc, Weak};
#[cfg(feature = "std")]
pub use std::sync::{Arc, Weak};
mod mutex_shim {
#[cfg(feature = "std")]
pub use std::sync::*;
#[cfg(not(feature = "std"))]
pub use spin::*;
pub use spin::{Mutex, MutexGuard};
#[cfg(feature = "std")]
pub use std::sync::{Mutex, MutexGuard};
/// A shimmed `Mutex` with an API mutual to `spin` and `std`.
#[derive(Default, Debug)]
pub struct ShimMutex<T>(Mutex<T>);
impl<T> ShimMutex<T> {
/// Construct a new `Mutex`.
pub const fn new(value: T) -> Self {
Self(Mutex::new(value))
}
/// Acquire a lock on the contents of the `Mutex`.
///
/// On no-`std` environments, this may spin until the lock is acquired. On `std` environments,
/// this may panic if the `Mutex` was poisoned.
pub fn lock(&self) -> MutexGuard<'_, T> {
#[cfg(feature = "std")]
let res = self.0.lock().unwrap();
@@ -25,10 +34,11 @@ mod mutex_shim {
}
pub use mutex_shim::{ShimMutex as Mutex, MutexGuard};
#[cfg(not(feature = "std"))]
pub use spin::Lazy as LazyLock;
#[rustversion::before(1.80)]
#[cfg(feature = "std")]
pub use spin::Lazy as LazyLock;
#[rustversion::since(1.80)]
#[cfg(not(feature = "std"))]
pub use spin::Lazy as LazyLock;
#[rustversion::since(1.80)]
#[cfg(feature = "std")]

View File

@@ -1,4 +1,4 @@
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
#![cfg_attr(docsrs, feature(doc_cfg))]
#![doc = include_str!("../README.md")]
#![deny(missing_docs)]

View File

@@ -1,5 +1,5 @@
#![cfg_attr(docsrs, feature(doc_cfg))]
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
#![cfg_attr(docsrs, feature(doc_cfg))]
#![cfg_attr(all(zalloc_rustc_nightly, feature = "allocator"), feature(allocator_api))]
//! Implementation of a Zeroizing Allocator, enabling zeroizing memory on deallocation.

View File

@@ -24,10 +24,8 @@ rand_core = { version = "0.6", default-features = false, features = ["std"] }
blake2 = { version = "0.11.0-rc.0", default-features = false, features = ["alloc"] }
schnorrkel = { version = "0.11", default-features = false, features = ["std"] }
transcript = { package = "flexible-transcript", path = "../crypto/transcript", default-features = false, features = ["std", "recommended"] }
dalek-ff-group = { path = "../crypto/dalek-ff-group", default-features = false, features = ["std"] }
ciphersuite = { path = "../crypto/ciphersuite", default-features = false, features = ["std"] }
schnorr = { package = "schnorr-signatures", path = "../crypto/schnorr", default-features = false, features = ["std", "aggregate"] }
dkg = { package = "dkg-musig", path = "../crypto/dkg/musig", default-features = false, features = ["std"] }
frost = { package = "modular-frost", path = "../crypto/frost" }
frost-schnorrkel = { path = "../crypto/schnorrkel" }

View File

@@ -1,4 +1,4 @@
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
#![cfg_attr(docsrs, feature(doc_cfg))]
#![doc = include_str!("../README.md")]
#![deny(missing_docs)]

View File

@@ -35,7 +35,7 @@ tributary-sdk = { path = "../../tributary-sdk" }
futures-util = { version = "0.3", default-features = false, features = ["std"] }
tokio = { version = "1", default-features = false, features = ["sync"] }
libp2p = { version = "0.54", default-features = false, features = ["tokio", "tcp", "noise", "yamux", "ping", "request-response", "gossipsub", "macros"] }
libp2p = { version = "0.56", default-features = false, features = ["tokio", "tcp", "noise", "yamux", "ping", "request-response", "gossipsub", "macros"] }
log = { version = "0.4", default-features = false, features = ["std"] }
serai-task = { path = "../../../common/task", version = "0.1" }

View File

@@ -1,4 +1,4 @@
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
#![cfg_attr(docsrs, feature(doc_cfg))]
#![doc = include_str!("../README.md")]
#![deny(missing_docs)]

View File

@@ -92,7 +92,8 @@ impl SwarmTask {
}
}
gossip::Event::Subscribed { .. } | gossip::Event::Unsubscribed { .. } => {}
gossip::Event::GossipsubNotSupported { peer_id } => {
gossip::Event::GossipsubNotSupported { peer_id } |
gossip::Event::SlowPeer { peer_id, .. } => {
let _: Result<_, _> = self.swarm.disconnect_peer_id(peer_id);
}
}

View File

@@ -1,4 +1,4 @@
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
#![cfg_attr(docsrs, feature(doc_cfg))]
#![doc = include_str!("../README.md")]
#![deny(missing_docs)]

View File

@@ -103,7 +103,7 @@ mod _internal_db {
// Tributary transactions to publish from the DKG confirmation task
TributaryTransactionsFromDkgConfirmation: (set: ExternalValidatorSet) -> Transaction,
// Participants to remove
RemoveParticipant: (set: ExternalValidatorSet) -> Participant,
RemoveParticipant: (set: ExternalValidatorSet) -> u16,
}
}
}
@@ -139,10 +139,11 @@ impl RemoveParticipant {
pub(crate) fn send(txn: &mut impl DbTxn, set: ExternalValidatorSet, participant: Participant) {
// If this set has yet to be retired, send this transaction
if RetiredTributary::get(txn, set.network).map(|session| session.0) < Some(set.session.0) {
_internal_db::RemoveParticipant::send(txn, set, &participant);
_internal_db::RemoveParticipant::send(txn, set, &u16::from(participant));
}
}
pub(crate) fn try_recv(txn: &mut impl DbTxn, set: ExternalValidatorSet) -> Option<Participant> {
_internal_db::RemoveParticipant::try_recv(txn, set)
.map(|i| Participant::new(i).expect("sent invalid participant index for removal"))
}
}

View File

@@ -3,11 +3,10 @@ use std::{boxed::Box, collections::HashMap};
use zeroize::Zeroizing;
use rand_core::OsRng;
use ciphersuite::{group::GroupEncoding, Ciphersuite};
use dalek_ff_group::Ristretto;
use ciphersuite::{group::GroupEncoding, *};
use dkg::{Participant, musig};
use frost_schnorrkel::{
frost::{FrostError, sign::*},
frost::{curve::Ristretto, FrostError, sign::*},
Schnorrkel,
};
@@ -31,7 +30,7 @@ fn schnorrkel() -> Schnorrkel {
fn our_i(
set: &NewSetInformation,
key: &Zeroizing<<Ristretto as Ciphersuite>::F>,
key: &Zeroizing<<Ristretto as WrappedGroup>::F>,
data: &HashMap<Participant, Vec<u8>>,
) -> Participant {
let public = SeraiAddress((Ristretto::generator() * key.deref()).to_bytes());
@@ -125,7 +124,7 @@ pub(crate) struct ConfirmDkgTask<CD: DbTrait, TD: DbTrait> {
set: NewSetInformation,
tributary_db: TD,
key: Zeroizing<<Ristretto as Ciphersuite>::F>,
key: Zeroizing<<Ristretto as WrappedGroup>::F>,
signer: Option<Signer>,
}
@@ -134,7 +133,7 @@ impl<CD: DbTrait, TD: DbTrait> ConfirmDkgTask<CD, TD> {
db: CD,
set: NewSetInformation,
tributary_db: TD,
key: Zeroizing<<Ristretto as Ciphersuite>::F>,
key: Zeroizing<<Ristretto as WrappedGroup>::F>,
) -> Self {
Self { db, set, tributary_db, key, signer: None }
}
@@ -153,7 +152,7 @@ impl<CD: DbTrait, TD: DbTrait> ConfirmDkgTask<CD, TD> {
db: &mut CD,
set: ExternalValidatorSet,
attempt: u32,
key: Zeroizing<<Ristretto as Ciphersuite>::F>,
key: Zeroizing<<Ristretto as WrappedGroup>::F>,
signer: &mut Option<Signer>,
) {
// Perform the preprocess

View File

@@ -7,7 +7,7 @@ use rand_core::{RngCore, OsRng};
use dalek_ff_group::Ristretto;
use ciphersuite::{
group::{ff::PrimeField, GroupEncoding},
Ciphersuite,
*,
};
use borsh::BorshDeserialize;
@@ -284,7 +284,7 @@ async fn handle_network(
&mut txn,
ExternalValidatorSet { network, session },
slash_report,
Signature(signature),
Signature::from(signature),
);
}
},
@@ -352,7 +352,7 @@ async fn main() {
let mut key_bytes = [0; 32];
key_bytes.copy_from_slice(&key_vec);
key_vec.zeroize();
let key = Zeroizing::new(<Ristretto as Ciphersuite>::F::from_repr(key_bytes).unwrap());
let key = Zeroizing::new(<Ristretto as WrappedGroup>::F::from_repr(key_bytes).unwrap());
key_bytes.zeroize();
key
};
@@ -439,7 +439,7 @@ async fn main() {
EphemeralEventStream::new(
db.clone(),
serai.clone(),
SeraiAddress((<Ristretto as Ciphersuite>::generator() * serai_key.deref()).to_bytes()),
SeraiAddress((<Ristretto as WrappedGroup>::generator() * serai_key.deref()).to_bytes()),
)
.continually_run(substrate_ephemeral_task_def, vec![substrate_task]),
);

View File

@@ -3,7 +3,7 @@ use std::sync::Arc;
use zeroize::Zeroizing;
use ciphersuite::Ciphersuite;
use ciphersuite::*;
use dalek_ff_group::Ristretto;
use tokio::sync::mpsc;
@@ -23,7 +23,7 @@ use serai_coordinator_p2p::P2p;
use crate::{Db, KeySet};
pub(crate) struct SubstrateTask<P: P2p> {
pub(crate) serai_key: Zeroizing<<Ristretto as Ciphersuite>::F>,
pub(crate) serai_key: Zeroizing<<Ristretto as WrappedGroup>::F>,
pub(crate) db: Db,
pub(crate) message_queue: Arc<MessageQueue>,
pub(crate) p2p: P,


@@ -4,7 +4,7 @@ use std::sync::Arc;
use zeroize::Zeroizing;
use rand_core::OsRng;
use blake2::{digest::typenum::U32, Digest, Blake2s};
use ciphersuite::Ciphersuite;
use ciphersuite::*;
use dalek_ff_group::Ristretto;
use tokio::sync::mpsc;
@@ -160,7 +160,7 @@ impl<CD: DbTrait, TD: DbTrait, P: P2p> ContinuallyRan
#[must_use]
async fn add_signed_unsigned_transaction<TD: DbTrait, P: P2p>(
tributary: &Tributary<TD, Transaction, P>,
key: &Zeroizing<<Ristretto as Ciphersuite>::F>,
key: &Zeroizing<<Ristretto as WrappedGroup>::F>,
mut tx: Transaction,
) -> bool {
// If this is a signed transaction, sign it
@@ -213,7 +213,7 @@ async fn add_with_recognition_check<TD: DbTrait, P: P2p>(
set: ExternalValidatorSet,
tributary_db: &mut TD,
tributary: &Tributary<TD, Transaction, P>,
key: &Zeroizing<<Ristretto as Ciphersuite>::F>,
key: &Zeroizing<<Ristretto as WrappedGroup>::F>,
tx: Transaction,
) -> bool {
let kind = tx.kind();
@@ -252,7 +252,7 @@ pub(crate) struct AddTributaryTransactionsTask<CD: DbTrait, TD: DbTrait, P: P2p>
tributary_db: TD,
tributary: Tributary<TD, Transaction, P>,
set: NewSetInformation,
key: Zeroizing<<Ristretto as Ciphersuite>::F>,
key: Zeroizing<<Ristretto as WrappedGroup>::F>,
}
impl<CD: DbTrait, TD: DbTrait, P: P2p> ContinuallyRan for AddTributaryTransactionsTask<CD, TD, P> {
type Error = DoesNotError;
@@ -382,7 +382,7 @@ pub(crate) struct SignSlashReportTask<CD: DbTrait, TD: DbTrait, P: P2p> {
tributary_db: TD,
tributary: Tributary<TD, Transaction, P>,
set: NewSetInformation,
key: Zeroizing<<Ristretto as Ciphersuite>::F>,
key: Zeroizing<<Ristretto as WrappedGroup>::F>,
}
impl<CD: DbTrait, TD: DbTrait, P: P2p> ContinuallyRan for SignSlashReportTask<CD, TD, P> {
type Error = DoesNotError;
@@ -470,7 +470,7 @@ pub(crate) async fn spawn_tributary<P: P2p>(
p2p: P,
p2p_add_tributary: &mpsc::UnboundedSender<(ExternalValidatorSet, Tributary<Db, Transaction, P>)>,
set: NewSetInformation,
serai_key: Zeroizing<<Ristretto as Ciphersuite>::F>,
serai_key: Zeroizing<<Ristretto as WrappedGroup>::F>,
) {
// Don't spawn retired Tributaries
if crate::db::RetiredTributary::get(&db, set.set.network).map(|session| session.0) >=
@@ -490,7 +490,7 @@ pub(crate) async fn spawn_tributary<P: P2p>(
let mut tributary_validators = Vec::with_capacity(set.validators.len());
for (validator, weight) in set.validators.iter().copied() {
let validator_key = <Ristretto as Ciphersuite>::read_G(&mut validator.0.as_slice())
let validator_key = <Ristretto as GroupIo>::read_G(&mut validator.0.as_slice())
.expect("Serai validator had an invalid public key");
let weight = u64::from(weight);
tributary_validators.push((validator_key, weight));


@@ -1,4 +1,4 @@
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
#![cfg_attr(docsrs, feature(doc_cfg))]
#![doc = include_str!("../README.md")]
#![deny(missing_docs)]


@@ -1,7 +1,7 @@
use std::collections::{VecDeque, HashSet};
use dalek_ff_group::Ristretto;
use ciphersuite::{group::GroupEncoding, Ciphersuite};
use ciphersuite::{group::GroupEncoding, *};
use serai_db::{Get, DbTxn, Db};
@@ -21,7 +21,7 @@ pub(crate) struct Blockchain<D: Db, T: TransactionTrait> {
block_number: u64,
tip: [u8; 32],
participants: HashSet<<Ristretto as Ciphersuite>::G>,
participants: HashSet<[u8; 32]>,
provided: ProvidedTransactions<D, T>,
mempool: Mempool<D, T>,
@@ -56,7 +56,7 @@ impl<D: Db, T: TransactionTrait> Blockchain<D, T> {
}
fn next_nonce_key(
genesis: &[u8; 32],
signer: &<Ristretto as Ciphersuite>::G,
signer: &<Ristretto as WrappedGroup>::G,
order: &[u8],
) -> Vec<u8> {
D::key(
@@ -69,12 +69,15 @@ impl<D: Db, T: TransactionTrait> Blockchain<D, T> {
pub(crate) fn new(
db: D,
genesis: [u8; 32],
participants: &[<Ristretto as Ciphersuite>::G],
participants: &[<Ristretto as WrappedGroup>::G],
) -> Self {
let mut res = Self {
db: Some(db.clone()),
genesis,
participants: participants.iter().copied().collect(),
participants: participants
.iter()
.map(<<Ristretto as WrappedGroup>::G as GroupEncoding>::to_bytes)
.collect(),
block_number: 0,
tip: genesis,
@@ -173,7 +176,7 @@ impl<D: Db, T: TransactionTrait> Blockchain<D, T> {
self.mempool.add::<N, _>(
|signer, order| {
if self.participants.contains(&signer) {
if self.participants.contains(&signer.to_bytes()) {
Some(
db.get(Self::next_nonce_key(&self.genesis, &signer, &order))
.map_or(0, |bytes| u32::from_le_bytes(bytes.try_into().unwrap())),
@@ -196,13 +199,13 @@ impl<D: Db, T: TransactionTrait> Blockchain<D, T> {
pub(crate) fn next_nonce(
&self,
signer: &<Ristretto as Ciphersuite>::G,
signer: &<Ristretto as WrappedGroup>::G,
order: &[u8],
) -> Option<u32> {
if let Some(next_nonce) = self.mempool.next_nonce_in_mempool(signer, order.to_vec()) {
return Some(next_nonce);
}
if self.participants.contains(signer) {
if self.participants.contains(&signer.to_bytes()) {
Some(
self
.db
@@ -251,7 +254,7 @@ impl<D: Db, T: TransactionTrait> Blockchain<D, T> {
self.tip,
self.provided.transactions.clone(),
&mut |signer, order| {
if self.participants.contains(signer) {
if self.participants.contains(&signer.to_bytes()) {
let key = Self::next_nonce_key(&self.genesis, signer, order);
let next = txn
.get(&key)
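Since the Ristretto point type used here no longer implements `Hash` (the manual `Hash` impl on the dalek wrappers is removed further down in this change), the participant and nonce collections above are keyed by the canonical 32-byte encoding rather than by the group element. A minimal sketch of that pattern, assuming the crate layout shown in this diff; `is_participant` is a hypothetical helper, not code from this change:
use std::collections::HashSet;
use ciphersuite::{group::GroupEncoding, WrappedGroup};
use dalek_ff_group::Ristretto;
// Membership is checked against the `[u8; 32]` encoding, not the point itself.
fn is_participant(
  participants: &HashSet<[u8; 32]>,
  signer: &<Ristretto as WrappedGroup>::G,
) -> bool {
  participants.contains(&signer.to_bytes())
}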


@@ -3,7 +3,7 @@ use std::{sync::Arc, io};
use zeroize::Zeroizing;
use ciphersuite::Ciphersuite;
use ciphersuite::*;
use dalek_ff_group::Ristretto;
use scale::Decode;
@@ -162,8 +162,8 @@ impl<D: Db, T: TransactionTrait, P: P2p> Tributary<D, T, P> {
db: D,
genesis: [u8; 32],
start_time: u64,
key: Zeroizing<<Ristretto as Ciphersuite>::F>,
validators: Vec<(<Ristretto as Ciphersuite>::G, u64)>,
key: Zeroizing<<Ristretto as WrappedGroup>::F>,
validators: Vec<(<Ristretto as WrappedGroup>::G, u64)>,
p2p: P,
) -> Option<Self> {
log::info!("new Tributary with genesis {}", hex::encode(genesis));
@@ -235,7 +235,7 @@ impl<D: Db, T: TransactionTrait, P: P2p> Tributary<D, T, P> {
pub async fn next_nonce(
&self,
signer: &<Ristretto as Ciphersuite>::G,
signer: &<Ristretto as WrappedGroup>::G,
order: &[u8],
) -> Option<u32> {
self.network.blockchain.read().await.next_nonce(signer, order)


@@ -1,7 +1,7 @@
use std::collections::HashMap;
use dalek_ff_group::Ristretto;
use ciphersuite::Ciphersuite;
use ciphersuite::{group::GroupEncoding, *};
use serai_db::{DbTxn, Db};
@@ -21,9 +21,9 @@ pub(crate) struct Mempool<D: Db, T: TransactionTrait> {
db: D,
genesis: [u8; 32],
last_nonce_in_mempool: HashMap<(<Ristretto as Ciphersuite>::G, Vec<u8>), u32>,
last_nonce_in_mempool: HashMap<([u8; 32], Vec<u8>), u32>,
txs: HashMap<[u8; 32], Transaction<T>>,
txs_per_signer: HashMap<<Ristretto as Ciphersuite>::G, u32>,
txs_per_signer: HashMap<[u8; 32], u32>,
}
impl<D: Db, T: TransactionTrait> Mempool<D, T> {
@@ -82,6 +82,7 @@ impl<D: Db, T: TransactionTrait> Mempool<D, T> {
}
Transaction::Application(tx) => match tx.kind() {
TransactionKind::Signed(order, Signed { signer, nonce, .. }) => {
let signer = signer.to_bytes();
let amount = *res.txs_per_signer.get(&signer).unwrap_or(&0) + 1;
res.txs_per_signer.insert(signer, amount);
@@ -107,7 +108,7 @@ impl<D: Db, T: TransactionTrait> Mempool<D, T> {
// Returns Ok(true) if new, Ok(false) if an already present unsigned, or the error.
pub(crate) fn add<
N: Network,
F: FnOnce(<Ristretto as Ciphersuite>::G, Vec<u8>) -> Option<u32>,
F: FnOnce(<Ristretto as WrappedGroup>::G, Vec<u8>) -> Option<u32>,
>(
&mut self,
blockchain_next_nonce: F,
@@ -140,6 +141,8 @@ impl<D: Db, T: TransactionTrait> Mempool<D, T> {
};
let mut next_nonce = blockchain_next_nonce;
let signer = signer.to_bytes();
if let Some(mempool_last_nonce) =
self.last_nonce_in_mempool.get(&(signer, order.clone()))
{
@@ -179,10 +182,10 @@ impl<D: Db, T: TransactionTrait> Mempool<D, T> {
// Returns None if the mempool doesn't have a nonce tracked.
pub(crate) fn next_nonce_in_mempool(
&self,
signer: &<Ristretto as Ciphersuite>::G,
signer: &<Ristretto as WrappedGroup>::G,
order: Vec<u8>,
) -> Option<u32> {
self.last_nonce_in_mempool.get(&(*signer, order)).copied().map(|nonce| nonce + 1)
self.last_nonce_in_mempool.get(&(signer.to_bytes(), order)).copied().map(|nonce| nonce + 1)
}
/// Get transactions to include in a block.
@@ -243,6 +246,8 @@ impl<D: Db, T: TransactionTrait> Mempool<D, T> {
if let Some(tx) = self.txs.remove(tx) {
if let TransactionKind::Signed(order, Signed { signer, nonce, .. }) = tx.kind() {
let signer = signer.to_bytes();
let amount = *self.txs_per_signer.get(&signer).unwrap() - 1;
self.txs_per_signer.insert(signer, amount);


@@ -10,11 +10,8 @@ use rand_chacha::ChaCha12Rng;
use transcript::{Transcript, RecommendedTranscript};
use ciphersuite::{
group::{
GroupEncoding,
ff::{Field, PrimeField},
},
Ciphersuite,
group::{ff::PrimeField, GroupEncoding},
*,
};
use dalek_ff_group::Ristretto;
use schnorr::{
@@ -51,24 +48,26 @@ fn challenge(
key: [u8; 32],
nonce: &[u8],
msg: &[u8],
) -> <Ristretto as Ciphersuite>::F {
) -> <Ristretto as WrappedGroup>::F {
let mut transcript = RecommendedTranscript::new(b"Tributary Chain Tendermint Message");
transcript.append_message(b"genesis", genesis);
transcript.append_message(b"key", key);
transcript.append_message(b"nonce", nonce);
transcript.append_message(b"message", msg);
<Ristretto as Ciphersuite>::F::from_bytes_mod_order_wide(&transcript.challenge(b"schnorr").into())
<Ristretto as WrappedGroup>::F::from_bytes_mod_order_wide(
&transcript.challenge(b"schnorr").into(),
)
}
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct Signer {
genesis: [u8; 32],
key: Zeroizing<<Ristretto as Ciphersuite>::F>,
key: Zeroizing<<Ristretto as WrappedGroup>::F>,
}
impl Signer {
pub(crate) fn new(genesis: [u8; 32], key: Zeroizing<<Ristretto as Ciphersuite>::F>) -> Signer {
pub(crate) fn new(genesis: [u8; 32], key: Zeroizing<<Ristretto as WrappedGroup>::F>) -> Signer {
Signer { genesis, key }
}
}
@@ -101,10 +100,10 @@ impl SignerTrait for Signer {
assert_eq!(nonce_ref, [0; 64].as_ref());
let nonce =
Zeroizing::new(<Ristretto as Ciphersuite>::F::from_bytes_mod_order_wide(&nonce_arr));
Zeroizing::new(<Ristretto as WrappedGroup>::F::from_bytes_mod_order_wide(&nonce_arr));
nonce_arr.zeroize();
assert!(!bool::from(nonce.ct_eq(&<Ristretto as Ciphersuite>::F::ZERO)));
assert!(!bool::from(nonce.ct_eq(&<Ristretto as WrappedGroup>::F::ZERO)));
let challenge = challenge(
self.genesis,
@@ -133,7 +132,7 @@ pub struct Validators {
impl Validators {
pub(crate) fn new(
genesis: [u8; 32],
validators: Vec<(<Ristretto as Ciphersuite>::G, u64)>,
validators: Vec<(<Ristretto as WrappedGroup>::G, u64)>,
) -> Option<Validators> {
let mut total_weight = 0;
let mut weights = HashMap::new();
@@ -220,7 +219,7 @@ impl SignatureScheme for Validators {
signers
.iter()
.zip(challenges)
.map(|(s, c)| (<Ristretto as Ciphersuite>::read_G(&mut s.as_slice()).unwrap(), c))
.map(|(s, c)| (<Ristretto as GroupIo>::read_G(&mut s.as_slice()).unwrap(), c))
.collect::<Vec<_>>()
.as_slice(),
)


@@ -5,7 +5,7 @@ use scale::{Encode, Decode, IoReader};
use blake2::{Digest, Blake2s256};
use dalek_ff_group::Ristretto;
use ciphersuite::Ciphersuite;
use ciphersuite::*;
use crate::{
transaction::{Transaction, TransactionKind, TransactionError},
@@ -50,7 +50,7 @@ impl Transaction for TendermintTx {
Blake2s256::digest(self.serialize()).into()
}
fn sig_hash(&self, _genesis: [u8; 32]) -> <Ristretto as Ciphersuite>::F {
fn sig_hash(&self, _genesis: [u8; 32]) -> <Ristretto as WrappedGroup>::F {
match self {
TendermintTx::SlashEvidence(_) => panic!("sig_hash called on slash evidence transaction"),
}


@@ -3,10 +3,7 @@ use std::{sync::Arc, io, collections::HashMap, fmt::Debug};
use blake2::{Digest, Blake2s256};
use dalek_ff_group::Ristretto;
use ciphersuite::{
group::{ff::Field, Group},
Ciphersuite,
};
use ciphersuite::{group::Group, *};
use schnorr::SchnorrSignature;
use serai_db::MemDb;
@@ -32,11 +29,11 @@ impl NonceTransaction {
nonce,
distinguisher,
Signed {
signer: <Ristretto as Ciphersuite>::G::identity(),
signer: <Ristretto as WrappedGroup>::G::identity(),
nonce,
signature: SchnorrSignature::<Ristretto> {
R: <Ristretto as Ciphersuite>::G::identity(),
s: <Ristretto as Ciphersuite>::F::ZERO,
R: <Ristretto as WrappedGroup>::G::identity(),
s: <Ristretto as WrappedGroup>::F::ZERO,
},
},
)


@@ -11,7 +11,7 @@ use rand::rngs::OsRng;
use blake2::{Digest, Blake2s256};
use dalek_ff_group::Ristretto;
use ciphersuite::{group::ff::Field, Ciphersuite};
use ciphersuite::*;
use serai_db::{DbTxn, Db, MemDb};
@@ -31,7 +31,7 @@ type N = TendermintNetwork<MemDb, SignedTransaction, DummyP2p>;
fn new_blockchain<T: TransactionTrait>(
genesis: [u8; 32],
participants: &[<Ristretto as Ciphersuite>::G],
participants: &[<Ristretto as WrappedGroup>::G],
) -> (MemDb, Blockchain<MemDb, T>) {
let db = MemDb::new();
let blockchain = Blockchain::new(db.clone(), genesis, participants);
@@ -82,7 +82,7 @@ fn invalid_block() {
assert!(blockchain.verify_block::<N>(&block, &validators, false).is_err());
}
let key = Zeroizing::new(<Ristretto as Ciphersuite>::F::random(&mut OsRng));
let key = Zeroizing::new(<Ristretto as WrappedGroup>::F::random(&mut OsRng));
let tx = crate::tests::signed_transaction(&mut OsRng, genesis, &key, 0);
// Not a participant
@@ -134,7 +134,7 @@ fn invalid_block() {
blockchain.verify_block::<N>(&block, &validators, false).unwrap();
match &mut block.transactions[0] {
Transaction::Application(tx) => {
tx.1.signature.s += <Ristretto as Ciphersuite>::F::ONE;
tx.1.signature.s += <Ristretto as WrappedGroup>::F::ONE;
}
_ => panic!("non-signed tx found"),
}
@@ -150,7 +150,7 @@ fn invalid_block() {
fn signed_transaction() {
let genesis = new_genesis();
let validators = Arc::new(Validators::new(genesis, vec![]).unwrap());
let key = Zeroizing::new(<Ristretto as Ciphersuite>::F::random(&mut OsRng));
let key = Zeroizing::new(<Ristretto as WrappedGroup>::F::random(&mut OsRng));
let tx = crate::tests::signed_transaction(&mut OsRng, genesis, &key, 0);
let signer = tx.1.signer;
@@ -339,7 +339,7 @@ fn provided_transaction() {
#[tokio::test]
async fn tendermint_evidence_tx() {
let genesis = new_genesis();
let key = Zeroizing::new(<Ristretto as Ciphersuite>::F::random(&mut OsRng));
let key = Zeroizing::new(<Ristretto as WrappedGroup>::F::random(&mut OsRng));
let signer = Signer::new(genesis, key.clone());
let signer_id = Ristretto::generator() * key.deref();
let validators = Arc::new(Validators::new(genesis, vec![(signer_id, 1)]).unwrap());
@@ -379,7 +379,7 @@ async fn tendermint_evidence_tx() {
let mut mempool: Vec<Transaction<SignedTransaction>> = vec![];
let mut signers = vec![];
for _ in 0 .. 5 {
let key = Zeroizing::new(<Ristretto as Ciphersuite>::F::random(&mut OsRng));
let key = Zeroizing::new(<Ristretto as WrappedGroup>::F::random(&mut OsRng));
let signer = Signer::new(genesis, key.clone());
let signer_id = Ristretto::generator() * key.deref();
signers.push((signer_id, 1));
@@ -446,7 +446,7 @@ async fn block_tx_ordering() {
}
let genesis = new_genesis();
let key = Zeroizing::new(<Ristretto as Ciphersuite>::F::random(&mut OsRng));
let key = Zeroizing::new(<Ristretto as WrappedGroup>::F::random(&mut OsRng));
// signer
let signer = crate::tests::signed_transaction(&mut OsRng, genesis, &key, 0).1.signer;


@@ -4,7 +4,7 @@ use zeroize::Zeroizing;
use rand::{RngCore, rngs::OsRng};
use dalek_ff_group::Ristretto;
use ciphersuite::{group::ff::Field, Ciphersuite};
use ciphersuite::*;
use tendermint::ext::Commit;
@@ -33,7 +33,7 @@ async fn mempool_addition() {
Some(Commit::<Arc<Validators>> { end_time: 0, validators: vec![], signature: vec![] })
};
let unsigned_in_chain = |_: [u8; 32]| false;
let key = Zeroizing::new(<Ristretto as Ciphersuite>::F::random(&mut OsRng));
let key = Zeroizing::new(<Ristretto as WrappedGroup>::F::random(&mut OsRng));
let first_tx = signed_transaction(&mut OsRng, genesis, &key, 0);
let signer = first_tx.1.signer;
@@ -125,7 +125,7 @@ async fn mempool_addition() {
// If the mempool doesn't have a nonce for an account, it should successfully use the
// blockchain's
let second_key = Zeroizing::new(<Ristretto as Ciphersuite>::F::random(&mut OsRng));
let second_key = Zeroizing::new(<Ristretto as WrappedGroup>::F::random(&mut OsRng));
let tx = signed_transaction(&mut OsRng, genesis, &second_key, 2);
let second_signer = tx.1.signer;
assert_eq!(mempool.next_nonce_in_mempool(&second_signer, vec![]), None);
@@ -165,7 +165,7 @@ fn too_many_mempool() {
Some(Commit::<Arc<Validators>> { end_time: 0, validators: vec![], signature: vec![] })
};
let unsigned_in_chain = |_: [u8; 32]| false;
let key = Zeroizing::new(<Ristretto as Ciphersuite>::F::random(&mut OsRng));
let key = Zeroizing::new(<Ristretto as WrappedGroup>::F::random(&mut OsRng));
// We should be able to add transactions up to the limit
for i in 0 .. ACCOUNT_MEMPOOL_LIMIT {


@@ -7,10 +7,7 @@ use rand::{RngCore, CryptoRng, rngs::OsRng};
use blake2::{Digest, Blake2s256};
use dalek_ff_group::Ristretto;
use ciphersuite::{
group::{ff::Field, Group},
Ciphersuite,
};
use ciphersuite::*;
use schnorr::SchnorrSignature;
use scale::Encode;
@@ -34,11 +31,11 @@ mod tendermint;
pub fn random_signed<R: RngCore + CryptoRng>(rng: &mut R) -> Signed {
Signed {
signer: <Ristretto as Ciphersuite>::G::random(&mut *rng),
signer: <Ristretto as WrappedGroup>::G::random(&mut *rng),
nonce: u32::try_from(rng.next_u64() >> 32 >> 1).unwrap(),
signature: SchnorrSignature::<Ristretto> {
R: <Ristretto as Ciphersuite>::G::random(&mut *rng),
s: <Ristretto as Ciphersuite>::F::random(rng),
R: <Ristretto as WrappedGroup>::G::random(&mut *rng),
s: <Ristretto as WrappedGroup>::F::random(rng),
},
}
}
@@ -137,18 +134,18 @@ impl Transaction for SignedTransaction {
pub fn signed_transaction<R: RngCore + CryptoRng>(
rng: &mut R,
genesis: [u8; 32],
key: &Zeroizing<<Ristretto as Ciphersuite>::F>,
key: &Zeroizing<<Ristretto as WrappedGroup>::F>,
nonce: u32,
) -> SignedTransaction {
let mut data = vec![0; 512];
rng.fill_bytes(&mut data);
let signer = <Ristretto as Ciphersuite>::generator() * **key;
let signer = <Ristretto as WrappedGroup>::generator() * **key;
let mut tx =
SignedTransaction(data, Signed { signer, nonce, signature: random_signed(rng).signature });
let sig_nonce = Zeroizing::new(<Ristretto as Ciphersuite>::F::random(rng));
let sig_nonce = Zeroizing::new(<Ristretto as WrappedGroup>::F::random(rng));
tx.1.signature.R = Ristretto::generator() * sig_nonce.deref();
tx.1.signature = SchnorrSignature::sign(key, sig_nonce, tx.sig_hash(genesis));
@@ -163,7 +160,7 @@ pub fn random_signed_transaction<R: RngCore + CryptoRng>(
let mut genesis = [0; 32];
rng.fill_bytes(&mut genesis);
let key = Zeroizing::new(<Ristretto as Ciphersuite>::F::random(&mut *rng));
let key = Zeroizing::new(<Ristretto as WrappedGroup>::F::random(&mut *rng));
// Shift over an additional bit to ensure it won't overflow when incremented
let nonce = u32::try_from(rng.next_u64() >> 32 >> 1).unwrap();
@@ -180,12 +177,11 @@ pub async fn tendermint_meta() -> ([u8; 32], Signer, [u8; 32], Arc<Validators>)
// signer
let genesis = new_genesis();
let signer =
Signer::new(genesis, Zeroizing::new(<Ristretto as Ciphersuite>::F::random(&mut OsRng)));
Signer::new(genesis, Zeroizing::new(<Ristretto as WrappedGroup>::F::random(&mut OsRng)));
let validator_id = signer.validator_id().await.unwrap();
// schema
let signer_pub =
<Ristretto as Ciphersuite>::read_G::<&[u8]>(&mut validator_id.as_slice()).unwrap();
let signer_pub = <Ristretto as GroupIo>::read_G::<&[u8]>(&mut validator_id.as_slice()).unwrap();
let validators = Arc::new(Validators::new(genesis, vec![(signer_pub, 1)]).unwrap());
(genesis, signer, validator_id, validators)


@@ -3,7 +3,7 @@ use rand::rngs::OsRng;
use blake2::{Digest, Blake2s256};
use dalek_ff_group::Ristretto;
use ciphersuite::{group::ff::Field, Ciphersuite};
use ciphersuite::*;
use crate::{
ReadWrite,
@@ -69,7 +69,7 @@ fn signed_transaction() {
}
{
let mut tx = tx.clone();
tx.1.signature.s += <Ristretto as Ciphersuite>::F::ONE;
tx.1.signature.s += <Ristretto as WrappedGroup>::F::ONE;
assert!(verify_transaction(&tx, genesis, &mut |_, _| Some(tx.1.nonce)).is_err());
}


@@ -4,7 +4,7 @@ use zeroize::Zeroizing;
use rand::{RngCore, rngs::OsRng};
use dalek_ff_group::Ristretto;
use ciphersuite::{Ciphersuite, group::ff::Field};
use ciphersuite::*;
use scale::Encode;
@@ -261,7 +261,7 @@ async fn conflicting_msgs_evidence_tx() {
let signed_1 = signed_for_b_r(0, 0, Data::Proposal(None, TendermintBlock(vec![0x11]))).await;
let signer_2 =
Signer::new(genesis, Zeroizing::new(<Ristretto as Ciphersuite>::F::random(&mut OsRng)));
Signer::new(genesis, Zeroizing::new(<Ristretto as WrappedGroup>::F::random(&mut OsRng)));
let signed_id_2 = signer_2.validator_id().await.unwrap();
let signed_2 = signed_from_data::<N>(
signer_2.into(),
@@ -278,10 +278,9 @@ async fn conflicting_msgs_evidence_tx() {
));
// update schema so that we don't fail due to invalid signature
let signer_pub =
<Ristretto as Ciphersuite>::read_G::<&[u8]>(&mut signer_id.as_slice()).unwrap();
let signer_pub = <Ristretto as GroupIo>::read_G::<&[u8]>(&mut signer_id.as_slice()).unwrap();
let signer_pub_2 =
<Ristretto as Ciphersuite>::read_G::<&[u8]>(&mut signed_id_2.as_slice()).unwrap();
<Ristretto as GroupIo>::read_G::<&[u8]>(&mut signed_id_2.as_slice()).unwrap();
let validators =
Arc::new(Validators::new(genesis, vec![(signer_pub, 1), (signer_pub_2, 1)]).unwrap());


@@ -8,7 +8,7 @@ use blake2::{Digest, Blake2b512};
use ciphersuite::{
group::{Group, GroupEncoding},
Ciphersuite,
*,
};
use dalek_ff_group::Ristretto;
use schnorr::SchnorrSignature;
@@ -43,7 +43,7 @@ pub enum TransactionError {
/// Data for a signed transaction.
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct Signed {
pub signer: <Ristretto as Ciphersuite>::G,
pub signer: <Ristretto as WrappedGroup>::G,
pub nonce: u32,
pub signature: SchnorrSignature<Ristretto>,
}
@@ -160,10 +160,10 @@ pub trait Transaction: 'static + Send + Sync + Clone + Eq + Debug + ReadWrite {
/// Do not override this unless you know what you're doing.
///
/// Panics if called on non-signed transactions.
fn sig_hash(&self, genesis: [u8; 32]) -> <Ristretto as Ciphersuite>::F {
fn sig_hash(&self, genesis: [u8; 32]) -> <Ristretto as WrappedGroup>::F {
match self.kind() {
TransactionKind::Signed(order, Signed { signature, .. }) => {
<Ristretto as Ciphersuite>::F::from_bytes_mod_order_wide(
<Ristretto as WrappedGroup>::F::from_bytes_mod_order_wide(
&Blake2b512::digest(
[
b"Tributary Signed Transaction",
@@ -182,8 +182,8 @@ pub trait Transaction: 'static + Send + Sync + Clone + Eq + Debug + ReadWrite {
}
}
pub trait GAIN: FnMut(&<Ristretto as Ciphersuite>::G, &[u8]) -> Option<u32> {}
impl<F: FnMut(&<Ristretto as Ciphersuite>::G, &[u8]) -> Option<u32>> GAIN for F {}
pub trait GAIN: FnMut(&<Ristretto as WrappedGroup>::G, &[u8]) -> Option<u32> {}
impl<F: FnMut(&<Ristretto as WrappedGroup>::G, &[u8]) -> Option<u32>> GAIN for F {}
pub(crate) fn verify_transaction<F: GAIN, T: Transaction>(
tx: &T,


@@ -1,3 +1,5 @@
#![expect(clippy::cast_possible_truncation)]
use core::fmt::Debug;
use std::{


@@ -1,3 +1,5 @@
#![expect(clippy::cast_possible_truncation)]
use std::collections::HashMap;
use scale::Encode;


@@ -1,4 +1,4 @@
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
#![cfg_attr(docsrs, feature(doc_cfg))]
#![doc = include_str!("../README.md")]
#![deny(missing_docs)]


@@ -6,8 +6,8 @@ use rand_core::{RngCore, CryptoRng};
use blake2::{digest::typenum::U32, Digest, Blake2b};
use ciphersuite::{
group::{ff::Field, Group, GroupEncoding},
Ciphersuite,
group::{Group, GroupEncoding},
*,
};
use dalek_ff_group::Ristretto;
use schnorr::SchnorrSignature;
@@ -52,7 +52,7 @@ impl SigningProtocolRound {
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
pub struct Signed {
/// The signer.
signer: <Ristretto as Ciphersuite>::G,
signer: <Ristretto as WrappedGroup>::G,
/// The signature.
signature: SchnorrSignature<Ristretto>,
}
@@ -73,7 +73,7 @@ impl BorshDeserialize for Signed {
impl Signed {
/// Fetch the signer.
pub(crate) fn signer(&self) -> <Ristretto as Ciphersuite>::G {
pub(crate) fn signer(&self) -> <Ristretto as WrappedGroup>::G {
self.signer
}
@@ -86,10 +86,10 @@ impl Signed {
impl Default for Signed {
fn default() -> Self {
Self {
signer: <Ristretto as Ciphersuite>::G::identity(),
signer: <Ristretto as WrappedGroup>::G::identity(),
signature: SchnorrSignature {
R: <Ristretto as Ciphersuite>::G::identity(),
s: <Ristretto as Ciphersuite>::F::ZERO,
R: <Ristretto as WrappedGroup>::G::identity(),
s: <Ristretto as WrappedGroup>::F::ZERO,
},
}
}
@@ -356,7 +356,7 @@ impl Transaction {
&mut self,
rng: &mut R,
genesis: [u8; 32],
key: &Zeroizing<<Ristretto as Ciphersuite>::F>,
key: &Zeroizing<<Ristretto as WrappedGroup>::F>,
) {
fn signed(tx: &mut Transaction) -> &mut Signed {
#[allow(clippy::match_same_arms)] // This doesn't make semantic sense here
@@ -380,13 +380,13 @@ impl Transaction {
}
// Decide the nonce to sign with
let sig_nonce = Zeroizing::new(<Ristretto as Ciphersuite>::F::random(rng));
let sig_nonce = Zeroizing::new(<Ristretto as WrappedGroup>::F::random(rng));
{
// Set the signer and the nonce
let signed = signed(self);
signed.signer = Ristretto::generator() * key.deref();
signed.signature.R = <Ristretto as Ciphersuite>::generator() * sig_nonce.deref();
signed.signature.R = <Ristretto as WrappedGroup>::generator() * sig_nonce.deref();
}
// Get the signature hash (which now includes `R || A` making it valid as the challenge)

View File

@@ -17,15 +17,12 @@ rustdoc-args = ["--cfg", "docsrs"]
workspace = true
[dependencies]
std-shims = { path = "../../common/std-shims", version = "^0.1.1", default-features = false, optional = true }
rand_core = { version = "0.6", default-features = false }
std-shims = { path = "../../common/std-shims", version = "0.1.4", default-features = false }
zeroize = { version = "^1.5", default-features = false, features = ["derive"] }
subtle = { version = "^2.4", default-features = false }
digest = { version = "0.11.0-rc.0", default-features = false, features = ["block-api"] }
transcript = { package = "flexible-transcript", path = "../transcript", version = "^0.3.2", default-features = false }
digest = { version = "0.11.0-rc.1", default-features = false }
ff = { version = "0.13", default-features = false, features = ["bits"] }
group = { version = "0.13", default-features = false }
@@ -33,24 +30,18 @@ group = { version = "0.13", default-features = false }
[dev-dependencies]
hex = { version = "0.4", default-features = false, features = ["std"] }
rand_core = { version = "0.6", default-features = false, features = ["std"] }
ff-group-tests = { version = "0.13", path = "../ff-group-tests" }
[features]
alloc = ["std-shims", "digest/alloc", "ff/alloc"]
alloc = ["zeroize/alloc", "digest/alloc", "ff/alloc"]
std = [
"alloc",
"std-shims/std",
"rand_core/std",
"zeroize/std",
"subtle/std",
"transcript/std",
"ff/std",
]


@@ -21,7 +21,7 @@ rand_core = { version = "0.6", default-features = false }
zeroize = { version = "^1.5", default-features = false, features = ["derive"] }
sha2 = { version = "0.11.0-rc.0", default-features = false }
sha2 = { version = "0.11.0-rc.2", default-features = false }
p256 = { version = "^0.13.1", default-features = false, features = ["arithmetic", "bits", "hash2curve"] }
k256 = { version = "^0.13.1", default-features = false, features = ["arithmetic", "bits", "hash2curve"] }


@@ -1,11 +1,11 @@
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
#![cfg_attr(docsrs, feature(doc_cfg))]
#![cfg_attr(not(feature = "std"), no_std)]
use zeroize::Zeroize;
use sha2::Sha512;
use ciphersuite::Ciphersuite;
use ciphersuite::{WrappedGroup, Id, WithPreferredHash, GroupCanonicalEncoding};
pub use k256;
pub use p256;
@@ -18,17 +18,20 @@ macro_rules! kp_curve {
$Ciphersuite: ident,
$ID: literal
) => {
impl Ciphersuite for $Ciphersuite {
impl WrappedGroup for $Ciphersuite {
type F = $lib::Scalar;
type G = $lib::ProjectivePoint;
type H = Sha512;
const ID: &'static [u8] = $ID;
fn generator() -> Self::G {
$lib::ProjectivePoint::GENERATOR
}
}
impl Id for $Ciphersuite {
const ID: &'static [u8] = $ID;
}
impl WithPreferredHash for $Ciphersuite {
type H = Sha512;
}
impl GroupCanonicalEncoding for $Ciphersuite {}
};
}


@@ -1,30 +1,24 @@
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
#![cfg_attr(docsrs, feature(doc_cfg))]
#![doc = include_str!("lib.md")]
#![cfg_attr(not(feature = "std"), no_std)]
use core::fmt::Debug;
#[cfg(feature = "alloc")]
#[allow(unused_imports)]
use std_shims::prelude::*;
#[cfg(feature = "alloc")]
use std_shims::io::{self, Read};
use rand_core::{RngCore, CryptoRng};
use subtle::{CtOption, ConstantTimeEq, ConditionallySelectable};
use zeroize::Zeroize;
use subtle::ConstantTimeEq;
pub use digest;
use digest::{array::ArraySize, block_api::BlockSizeUser, OutputSizeUser, Digest, HashMarker};
use transcript::SecureDigest;
use digest::{array::ArraySize, OutputSizeUser, Digest, HashMarker};
pub use group;
use group::{
ff::{Field, PrimeField, PrimeFieldBits},
ff::{PrimeField, PrimeFieldBits},
Group, GroupOps,
prime::PrimeGroup,
};
#[cfg(feature = "alloc")]
use group::GroupEncoding;
pub trait FromUniformBytes<T> {
@@ -36,74 +30,115 @@ impl<const N: usize, F: group::ff::FromUniformBytes<N>> FromUniformBytes<[u8; N]
}
}
/// Unified trait defining a ciphersuite around an elliptic curve.
pub trait Ciphersuite:
/// A marker trait for fields which fleshes them out a bit more.
pub trait F: PrimeField + PrimeFieldBits + Zeroize {}
impl<Fi: PrimeField + PrimeFieldBits + Zeroize> F for Fi {}
/// A marker trait for groups which fleshes them out a bit more.
pub trait G:
Group + GroupOps + GroupEncoding + PrimeGroup + ConstantTimeEq + ConditionallySelectable + Zeroize
{
}
impl<
Gr: Group
+ GroupOps
+ GroupEncoding
+ PrimeGroup
+ ConstantTimeEq
+ ConditionallySelectable
+ Zeroize,
> G for Gr
{
}
/// A `Group` type which has been wrapped into the current type.
///
/// This avoids having to re-implement all of the `Group` traits on the wrapper.
// TODO: Remove these bounds
pub trait WrappedGroup:
'static + Send + Sync + Clone + Copy + PartialEq + Eq + Debug + Zeroize
{
/// Scalar field element type.
// This is available via G::Scalar yet `C::G::Scalar` is ambiguous, forcing horrific accesses
type F: PrimeField
+ PrimeFieldBits
+ Zeroize
+ FromUniformBytes<<<Self::H as OutputSizeUser>::OutputSize as ArraySize>::ArrayType<u8>>;
// This is available via `G::Scalar` yet `WG::G::Scalar` is ambiguous, forcing horrific accesses
type F: F;
/// Group element type.
type G: Group<Scalar = Self::F> + GroupOps + PrimeGroup + Zeroize + ConstantTimeEq;
/// Hash algorithm used with this curve.
// Requires BlockSizeUser so it can be used within Hkdf which requires that.
type H: Send + Clone + BlockSizeUser + Digest + HashMarker + SecureDigest;
/// ID for this curve.
const ID: &'static [u8];
type G: Group<Scalar = Self::F> + G;
/// Generator for the group.
// While group does provide this in its API, privacy coins may want to use a custom basepoint
fn generator() -> Self::G;
}
impl<Gr: G<Scalar: F>> WrappedGroup for Gr {
type F = <Gr as Group>::Scalar;
type G = Gr;
fn generator() -> Self::G {
<Self::G as Group>::generator()
}
}
/// An ID for an object.
pub trait Id {
/// The ID.
const ID: &'static [u8];
}
/// A group with a preferred hash function.
pub trait WithPreferredHash:
WrappedGroup<
F: FromUniformBytes<<<Self::H as OutputSizeUser>::OutputSize as ArraySize>::ArrayType<u8>>,
>
{
type H: Send + Clone + Digest + HashMarker;
#[allow(non_snake_case)]
fn hash_to_F(data: &[u8]) -> Self::F {
Self::F::from_uniform_bytes(&Self::H::digest(data).into())
}
}
/// Generate a random non-zero scalar.
#[allow(non_snake_case)]
fn random_nonzero_F<R: RngCore + CryptoRng>(rng: &mut R) -> Self::F {
let mut res;
while {
res = Self::F::random(&mut *rng);
res.ct_eq(&Self::F::ZERO).into()
} {}
res
}
/// Read a canonical scalar from something implementing std::io::Read.
#[cfg(feature = "alloc")]
#[allow(non_snake_case)]
fn read_F<R: Read>(reader: &mut R) -> io::Result<Self::F> {
let mut encoding = <Self::F as PrimeField>::Repr::default();
reader.read_exact(encoding.as_mut())?;
// ff mandates this is canonical
let res = Option::<Self::F>::from(Self::F::from_repr(encoding))
.ok_or_else(|| io::Error::other("non-canonical scalar"));
encoding.as_mut().zeroize();
res
}
/// Read a canonical point from something implementing std::io::Read.
/// A group which always encodes points canonically and supports decoding points while checking
/// they have a canonical encoding.
pub trait GroupCanonicalEncoding: WrappedGroup {
/// Decode a point from its canonical encoding.
///
/// The provided implementation is safe so long as `GroupEncoding::to_bytes` always returns a
/// canonical serialization.
#[cfg(feature = "alloc")]
#[allow(non_snake_case)]
fn read_G<R: Read>(reader: &mut R) -> io::Result<Self::G> {
let mut encoding = <Self::G as GroupEncoding>::Repr::default();
reader.read_exact(encoding.as_mut())?;
let point = Option::<Self::G>::from(Self::G::from_bytes(&encoding))
.ok_or_else(|| io::Error::other("invalid point"))?;
if point.to_bytes().as_ref() != encoding.as_ref() {
Err(io::Error::other("non-canonical point"))?;
}
Ok(point)
/// Returns `None` if the point was invalid or the encoding wasn't canonical.
///
/// If `<Self::G as GroupEncoding>::from_bytes` already only accepts canonical encodings, this
/// SHOULD be overridden with `<Self::G as GroupEncoding>::from_bytes(bytes)`.
fn from_canonical_bytes(bytes: &<Self::G as GroupEncoding>::Repr) -> CtOption<Self::G> {
let res = Self::G::from_bytes(bytes).unwrap_or(Self::generator());
// Safe due to the bound that points are always encoded canonically
let canonical = res.to_bytes().as_ref().ct_eq(bytes.as_ref());
CtOption::new(res, canonical)
}
}
/// `std::io` extensions for `GroupCanonicalEncoding`.
#[allow(non_snake_case)]
pub trait GroupIo: GroupCanonicalEncoding {
/// Read a canonical field element from something implementing `std::io::Read`.
fn read_F<R: Read>(reader: &mut R) -> io::Result<Self::F> {
let mut bytes = <Self::F as PrimeField>::Repr::default();
reader.read_exact(bytes.as_mut())?;
// `ff` mandates this is canonical
let res = Option::<Self::F>::from(Self::F::from_repr(bytes))
.ok_or_else(|| io::Error::other("non-canonical scalar"));
bytes.as_mut().zeroize();
res
}
/// Read a canonical point from something implementing `std::io::Read`.
fn read_G<R: Read>(reader: &mut R) -> io::Result<Self::G> {
let mut bytes = <Self::G as GroupEncoding>::Repr::default();
reader.read_exact(bytes.as_mut())?;
let res = Option::<Self::G>::from(Self::from_canonical_bytes(&bytes))
.ok_or_else(|| io::Error::other("invalid point"))?;
bytes.as_mut().zeroize();
Ok(res)
}
}
impl<Gr: GroupCanonicalEncoding> GroupIo for Gr {}
/// Unified trait defining a ciphersuite around an elliptic curve.
pub trait Ciphersuite: Id + WithPreferredHash + GroupCanonicalEncoding {}
impl<C: Id + WithPreferredHash + GroupCanonicalEncoding> Ciphersuite for C {}
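In short, the monolithic `Ciphersuite` trait above is split into `WrappedGroup` (scalar and group types plus the generator), `Id` (the curve ID), `WithPreferredHash` (the preferred hash and `hash_to_F`), and `GroupCanonicalEncoding` (canonical decoding), with `GroupIo` and `Ciphersuite` itself supplied by blanket impls. A minimal sketch of downstream code bounding on only what it needs, assuming the std/alloc features are enabled; `public_key` and `roundtrips` are hypothetical helpers, not part of this change:
use ciphersuite::{group::GroupEncoding, WrappedGroup, GroupIo};
// Deriving a public key only needs the group and its generator.
fn public_key<C: WrappedGroup>(secret: C::F) -> C::G {
  C::generator() * secret
}
// Round-tripping a point through its canonical encoding only needs `GroupCanonicalEncoding`,
// with `read_G` supplied by the blanket `GroupIo` impl.
fn roundtrips<C: GroupIo>(point: &C::G) -> bool {
  let bytes = point.to_bytes();
  C::read_G(&mut bytes.as_ref()).map(|decoded| decoded == *point).unwrap_or(false)
}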


@@ -1,6 +1,6 @@
[package]
name = "dalek-ff-group"
version = "0.4.6"
version = "0.5.0"
description = "ff/group bindings around curve25519-dalek"
license = "MIT"
repository = "https://github.com/serai-dex/serai/tree/develop/crypto/dalek-ff-group"
@@ -22,15 +22,13 @@ subtle = { version = "^2.4", default-features = false }
rand_core = { version = "0.6", default-features = false }
digest = { version = "0.10", default-features = false }
sha2 = { version = "0.11.0-rc.0", default-features = false }
sha2 = { version = "0.11.0-rc.2", default-features = false, features = ["zeroize"] }
blake2 = { version = "0.11.0-rc.2", default-features = false, features = ["zeroize"] }
prime-field = { path = "../prime-field", default-features = false }
ciphersuite = { version = "0.4.2", path = "../ciphersuite", default-features = false }
crypto-bigint = { version = "0.5", default-features = false, features = ["zeroize"] }
curve25519-dalek = { version = ">= 4.0, < 4.2", default-features = false, features = ["zeroize", "digest", "group", "precomputed-tables"] }
curve25519-dalek = { version = ">= 4.0, < 4.2", default-features = false, features = ["zeroize", "digest", "group-bits", "precomputed-tables"] }
[dev-dependencies]
hex = "0.4"
@@ -38,6 +36,6 @@ rand_core = { version = "0.6", default-features = false, features = ["std"] }
ff-group-tests = { path = "../ff-group-tests" }
[features]
alloc = ["zeroize/alloc", "digest/alloc", "prime-field/alloc", "ciphersuite/alloc", "curve25519-dalek/alloc"]
std = ["alloc", "zeroize/std", "subtle/std", "rand_core/std", "digest/std", "prime-field/std", "ciphersuite/std"]
alloc = ["zeroize/alloc", "prime-field/alloc", "ciphersuite/alloc", "curve25519-dalek/alloc"]
std = ["alloc", "zeroize/std", "subtle/std", "rand_core/std", "prime-field/std", "ciphersuite/std"]
default = ["std"]


@@ -1,49 +1,48 @@
use zeroize::Zeroize;
use sha2::Sha512;
use blake2::Blake2b512;
use ciphersuite::{group::Group, Ciphersuite};
use ::ciphersuite::{group::Group, *};
use crate::Scalar;
macro_rules! dalek_curve {
(
$feature: literal,
$Ciphersuite: ident,
$Point: ident,
$ID: literal
) => {
use crate::$Point;
impl Ciphersuite for $Ciphersuite {
type F = Scalar;
type G = $Point;
type H = Sha512;
const ID: &'static [u8] = $ID;
fn generator() -> Self::G {
$Point::generator()
}
}
};
}
use crate::*;
/// Ciphersuite for Ristretto.
#[derive(Clone, Copy, PartialEq, Eq, Debug, Zeroize)]
pub struct Ristretto;
dalek_curve!("ristretto", Ristretto, RistrettoPoint, b"ristretto");
#[test]
fn test_ristretto() {
ff_group_tests::group::test_prime_group_bits::<_, RistrettoPoint>(&mut rand_core::OsRng);
impl WrappedGroup for Ristretto {
type F = Scalar;
type G = RistrettoPoint;
fn generator() -> Self::G {
<RistrettoPoint as Group>::generator()
}
}
impl Id for Ristretto {
const ID: &[u8] = b"ristretto";
}
impl WithPreferredHash for Ristretto {
type H = Blake2b512;
}
impl GroupCanonicalEncoding for Ristretto {
fn from_canonical_bytes(bytes: &<Self::G as GroupEncoding>::Repr) -> CtOption<Self::G> {
Self::G::from_bytes(bytes)
}
}
/// Ciphersuite for Ed25519, inspired by RFC-8032.
/// Ciphersuite for Ed25519.
#[derive(Clone, Copy, PartialEq, Eq, Debug, Zeroize)]
pub struct Ed25519;
dalek_curve!("ed25519", Ed25519, EdwardsPoint, b"edwards25519");
#[test]
fn test_ed25519() {
ff_group_tests::group::test_prime_group_bits::<_, EdwardsPoint>(&mut rand_core::OsRng);
impl WrappedGroup for Ed25519 {
type F = Scalar;
type G = EdwardsPoint;
fn generator() -> Self::G {
<EdwardsPoint as Group>::generator()
}
}
impl Id for Ed25519 {
const ID: &[u8] = b"ed25519";
}
impl WithPreferredHash for Ed25519 {
type H = Sha512;
}
impl GroupCanonicalEncoding for Ed25519 {}
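Of note: Ristretto overrides `from_canonical_bytes` with a direct `from_bytes`, as Ristretto decoding already rejects non-canonical encodings, while Ed25519 relies on the default re-serialize-and-compare check. A small usage sketch of the resulting API, mirroring the coordinator/tributary call sites earlier in this change; `decode_key` is a hypothetical helper:
use ciphersuite::{WrappedGroup, GroupIo};
use dalek_ff_group::Ristretto;
// Returns `None` for invalid or non-canonical 32-byte encodings.
fn decode_key(bytes: &[u8; 32]) -> Option<<Ristretto as WrappedGroup>::G> {
  <Ristretto as GroupIo>::read_G(&mut bytes.as_slice()).ok()
}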


@@ -1,5 +1,5 @@
#![allow(deprecated)]
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
#![cfg_attr(docsrs, feature(doc_cfg))]
#![no_std] // Prevents writing new code, in what should be a simple wrapper, which requires std
#![doc = include_str!("../README.md")]
#![allow(clippy::redundant_closure_call)]
@@ -7,33 +7,22 @@
use core::{
borrow::Borrow,
ops::{Deref, Add, AddAssign, Sub, SubAssign, Neg, Mul, MulAssign},
iter::{Iterator, Sum, Product},
hash::{Hash, Hasher},
iter::{Iterator, Sum},
};
use zeroize::Zeroize;
use subtle::{ConstantTimeEq, ConditionallySelectable};
use rand_core::RngCore;
use digest::{consts::U64, Digest, HashMarker};
use subtle::{Choice, CtOption};
pub use curve25519_dalek as dalek;
use dalek::{
constants::{self, BASEPOINT_ORDER},
scalar::Scalar as DScalar,
edwards::{EdwardsPoint as DEdwardsPoint, EdwardsBasepointTable, CompressedEdwardsY},
ristretto::{RistrettoPoint as DRistrettoPoint, RistrettoBasepointTable, CompressedRistretto},
use curve25519_dalek::{
edwards::{EdwardsPoint as DEdwardsPoint, CompressedEdwardsY},
};
pub use constants::{ED25519_BASEPOINT_TABLE, RISTRETTO_BASEPOINT_TABLE};
pub use curve25519_dalek::{Scalar, ristretto::RistrettoPoint};
use ::ciphersuite::group::{
ff::{Field, PrimeField, FieldBits, PrimeFieldBits, FromUniformBytes},
Group, GroupEncoding,
prime::PrimeGroup,
};
use ::ciphersuite::group::{Group, GroupEncoding, prime::PrimeGroup};
mod ciphersuite;
pub use crate::ciphersuite::{Ed25519, Ristretto};
@@ -97,7 +86,41 @@ macro_rules! constant_time {
}
};
}
pub(crate) use constant_time;
macro_rules! math_op_without_wrapping {
(
$Value: ident,
$Other: ident,
$Op: ident,
$op_fn: ident,
$Assign: ident,
$assign_fn: ident,
$function: expr
) => {
impl $Op<$Other> for $Value {
type Output = $Value;
fn $op_fn(self, other: $Other) -> Self::Output {
Self($function(self.0, other))
}
}
impl $Assign<$Other> for $Value {
fn $assign_fn(&mut self, other: $Other) {
self.0 = $function(self.0, other);
}
}
impl<'a> $Op<&'a $Other> for $Value {
type Output = $Value;
fn $op_fn(self, other: &'a $Other) -> Self::Output {
Self($function(self.0, other))
}
}
impl<'a> $Assign<&'a $Other> for $Value {
fn $assign_fn(&mut self, other: &'a $Other) {
self.0 = $function(self.0, other);
}
}
};
}
macro_rules! math_op {
(
@@ -133,20 +156,12 @@ macro_rules! math_op {
}
};
}
pub(crate) use math_op;
macro_rules! math {
($Value: ident, $Factor: ident, $add: expr, $sub: expr, $mul: expr) => {
math_op!($Value, $Value, Add, add, AddAssign, add_assign, $add);
math_op!($Value, $Value, Sub, sub, SubAssign, sub_assign, $sub);
math_op!($Value, $Factor, Mul, mul, MulAssign, mul_assign, $mul);
};
}
pub(crate) use math;
macro_rules! math_neg {
($Value: ident, $Factor: ident, $add: expr, $sub: expr, $mul: expr) => {
math!($Value, $Factor, $add, $sub, $mul);
math_op!($Value, $Value, Add, add, AddAssign, add_assign, $add);
math_op!($Value, $Value, Sub, sub, SubAssign, sub_assign, $sub);
math_op_without_wrapping!($Value, $Factor, Mul, mul, MulAssign, mul_assign, $mul);
impl Neg for $Value {
type Output = Self;
@@ -157,187 +172,6 @@ macro_rules! math_neg {
};
}
/// Wrapper around the dalek Scalar type.
#[derive(Clone, Copy, PartialEq, Eq, Default, Debug, Zeroize)]
pub struct Scalar(pub DScalar);
deref_borrow!(Scalar, DScalar);
constant_time!(Scalar, DScalar);
math_neg!(Scalar, Scalar, DScalar::add, DScalar::sub, DScalar::mul);
macro_rules! from_wrapper {
($uint: ident) => {
impl From<$uint> for Scalar {
fn from(a: $uint) -> Scalar {
Scalar(DScalar::from(a))
}
}
};
}
from_wrapper!(u8);
from_wrapper!(u16);
from_wrapper!(u32);
from_wrapper!(u64);
from_wrapper!(u128);
impl Scalar {
pub fn pow(&self, other: Scalar) -> Scalar {
let mut table = [Scalar::ONE; 16];
table[1] = *self;
for i in 2 .. 16 {
table[i] = table[i - 1] * self;
}
let mut res = Scalar::ONE;
let mut bits = 0;
for (i, mut bit) in other.to_le_bits().iter_mut().rev().enumerate() {
bits <<= 1;
let mut bit = u8_from_bool(&mut bit);
bits |= bit;
bit.zeroize();
if ((i + 1) % 4) == 0 {
if i != 3 {
for _ in 0 .. 4 {
res *= res;
}
}
let mut scale_by = Scalar::ONE;
#[allow(clippy::needless_range_loop)]
for i in 0 .. 16 {
#[allow(clippy::cast_possible_truncation)] // Safe since 0 .. 16
{
scale_by = <_>::conditional_select(&scale_by, &table[i], bits.ct_eq(&(i as u8)));
}
}
res *= scale_by;
bits = 0;
}
}
res
}
/// Perform wide reduction on a 64-byte array to create a Scalar without bias.
pub fn from_bytes_mod_order_wide(bytes: &[u8; 64]) -> Scalar {
Self(DScalar::from_bytes_mod_order_wide(bytes))
}
/// Derive a Scalar without bias from a digest via wide reduction.
pub fn from_hash<D: Digest<OutputSize = U64> + HashMarker>(hash: D) -> Scalar {
let mut output = [0u8; 64];
output.copy_from_slice(&hash.finalize());
let res = Scalar(DScalar::from_bytes_mod_order_wide(&output));
output.zeroize();
res
}
}
impl Field for Scalar {
const ZERO: Scalar = Scalar(DScalar::ZERO);
const ONE: Scalar = Scalar(DScalar::ONE);
fn random(rng: impl RngCore) -> Self {
Self(<DScalar as Field>::random(rng))
}
fn square(&self) -> Self {
Self(self.0.square())
}
fn double(&self) -> Self {
Self(self.0.double())
}
fn invert(&self) -> CtOption<Self> {
<DScalar as Field>::invert(&self.0).map(Self)
}
fn sqrt(&self) -> CtOption<Self> {
self.0.sqrt().map(Self)
}
fn sqrt_ratio(num: &Self, div: &Self) -> (Choice, Self) {
let (choice, res) = DScalar::sqrt_ratio(num, div);
(choice, Self(res))
}
}
impl PrimeField for Scalar {
type Repr = [u8; 32];
const MODULUS: &'static str = <DScalar as PrimeField>::MODULUS;
const NUM_BITS: u32 = <DScalar as PrimeField>::NUM_BITS;
const CAPACITY: u32 = <DScalar as PrimeField>::CAPACITY;
const TWO_INV: Scalar = Scalar(<DScalar as PrimeField>::TWO_INV);
const MULTIPLICATIVE_GENERATOR: Scalar =
Scalar(<DScalar as PrimeField>::MULTIPLICATIVE_GENERATOR);
const S: u32 = <DScalar as PrimeField>::S;
const ROOT_OF_UNITY: Scalar = Scalar(<DScalar as PrimeField>::ROOT_OF_UNITY);
const ROOT_OF_UNITY_INV: Scalar = Scalar(<DScalar as PrimeField>::ROOT_OF_UNITY_INV);
const DELTA: Scalar = Scalar(<DScalar as PrimeField>::DELTA);
fn from_repr(bytes: [u8; 32]) -> CtOption<Self> {
<DScalar as PrimeField>::from_repr(bytes).map(Scalar)
}
fn to_repr(&self) -> [u8; 32] {
self.0.to_repr()
}
fn is_odd(&self) -> Choice {
self.0.is_odd()
}
fn from_u128(num: u128) -> Self {
Scalar(DScalar::from_u128(num))
}
}
impl PrimeFieldBits for Scalar {
type ReprBits = [u8; 32];
fn to_le_bits(&self) -> FieldBits<Self::ReprBits> {
self.to_repr().into()
}
fn char_le_bits() -> FieldBits<Self::ReprBits> {
BASEPOINT_ORDER.to_bytes().into()
}
}
impl FromUniformBytes<64> for Scalar {
fn from_uniform_bytes(bytes: &[u8; 64]) -> Self {
Self::from_bytes_mod_order_wide(bytes)
}
}
impl Sum<Scalar> for Scalar {
fn sum<I: Iterator<Item = Scalar>>(iter: I) -> Scalar {
Self(DScalar::sum(iter))
}
}
impl<'a> Sum<&'a Scalar> for Scalar {
fn sum<I: Iterator<Item = &'a Scalar>>(iter: I) -> Scalar {
Self(DScalar::sum(iter))
}
}
impl Product<Scalar> for Scalar {
fn product<I: Iterator<Item = Scalar>>(iter: I) -> Scalar {
Self(DScalar::product(iter))
}
}
impl<'a> Product<&'a Scalar> for Scalar {
fn product<I: Iterator<Item = &'a Scalar>>(iter: I) -> Scalar {
Self(DScalar::product(iter))
}
}
macro_rules! dalek_group {
(
$Point: ident,
@@ -347,9 +181,6 @@ macro_rules! dalek_group {
$Table: ident,
$DCompressed: ident,
$BASEPOINT_POINT: ident,
$BASEPOINT_TABLE: ident
) => {
/// Wrapper around the dalek Point type.
///
@@ -363,9 +194,6 @@ macro_rules! dalek_group {
constant_time!($Point, $DPoint);
math_neg!($Point, Scalar, $DPoint::add, $DPoint::sub, $DPoint::mul);
/// The basepoint for this curve.
pub const $BASEPOINT_POINT: $Point = $Point(constants::$BASEPOINT_POINT);
impl Sum<$Point> for $Point {
fn sum<I: Iterator<Item = $Point>>(iter: I) -> $Point {
Self($DPoint::sum(iter))
@@ -396,7 +224,7 @@ macro_rules! dalek_group {
Self($DPoint::identity())
}
fn generator() -> Self {
$BASEPOINT_POINT
Self(<$DPoint as Group>::generator())
}
fn is_identity(&self) -> Choice {
self.0.ct_eq(&$DPoint::identity())
@@ -429,24 +257,6 @@ macro_rules! dalek_group {
}
impl PrimeGroup for $Point {}
impl Mul<Scalar> for &$Table {
type Output = $Point;
fn mul(self, b: Scalar) -> $Point {
$Point(&b.0 * self)
}
}
// Support being used as a key in a table
// While it is expensive as a key, due to the field operations required, there's frequently
// use cases for public key -> value lookups
#[allow(unknown_lints, renamed_and_removed_lints)]
#[allow(clippy::derived_hash_with_manual_eq, clippy::derive_hash_xor_eq)]
impl Hash for $Point {
fn hash<H: Hasher>(&self, state: &mut H) {
self.to_bytes().hash(state);
}
}
};
}
@@ -456,24 +266,6 @@ dalek_group!(
|point: DEdwardsPoint| point.is_torsion_free(),
EdwardsBasepointTable,
CompressedEdwardsY,
ED25519_BASEPOINT_POINT,
ED25519_BASEPOINT_TABLE
);
impl EdwardsPoint {
pub fn mul_by_cofactor(&self) -> EdwardsPoint {
EdwardsPoint(self.0.mul_by_cofactor())
}
}
dalek_group!(
RistrettoPoint,
DRistrettoPoint,
|_| true,
RistrettoBasepointTable,
CompressedRistretto,
RISTRETTO_BASEPOINT_POINT,
RISTRETTO_BASEPOINT_TABLE
);
#[test]
@@ -494,21 +286,3 @@ prime_field::odd_prime_field_with_specific_repr!(
false,
crate::ThirtyTwoArray
);
impl FieldElement {
/// Create a FieldElement from a `crypto_bigint::U256`.
///
/// This will reduce the `U256` by the modulus, into a member of the field.
#[deprecated]
pub const fn from_u256(u256: &crypto_bigint::U256) -> Self {
FieldElement::from(&prime_field::crypto_bigint::U256::from_words(*u256.as_words()))
}
/// Create a `FieldElement` from the reduction of a 512-bit number.
///
/// The bytes are interpreted in little-endian format.
#[deprecated]
pub fn wide_reduce(value: [u8; 64]) -> Self {
<FieldElement as ::ciphersuite::group::ff::FromUniformBytes<_>>::from_uniform_bytes(&value)
}
}


@@ -21,21 +21,14 @@ zeroize = { version = "^1.5", default-features = false, features = ["zeroize_der
thiserror = { version = "2", default-features = false }
std-shims = { version = "0.1", path = "../../common/std-shims", default-features = false }
borsh = { version = "1", default-features = false, features = ["derive", "de_strict_order"], optional = true }
std-shims = { version = "0.1", path = "../../common/std-shims", default-features = false, features = ["alloc"] }
ciphersuite = { path = "../ciphersuite", version = "^0.4.1", default-features = false, features = ["alloc"] }
[features]
std = [
"thiserror/std",
"std-shims/std",
"borsh?/std",
"ciphersuite/std",
]
borsh = ["dep:borsh"]
default = ["std"]


@@ -20,7 +20,7 @@ workspace = true
zeroize = { version = "^1.5", default-features = false }
rand_core = { version = "0.6", default-features = false }
std-shims = { version = "0.1", path = "../../../common/std-shims", default-features = false }
std-shims = { version = "0.1", path = "../../../common/std-shims", default-features = false, features = ["alloc"] }
ciphersuite = { path = "../../ciphersuite", version = "^0.4.1", default-features = false }
dkg = { path = "../", version = "0.6", default-features = false }


@@ -1,4 +1,4 @@
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
#![cfg_attr(docsrs, feature(doc_cfg))]
#![doc = include_str!("../README.md")]
#![no_std]
@@ -10,12 +10,12 @@ use rand_core::{RngCore, CryptoRng};
use ciphersuite::{
group::ff::{Field, PrimeField},
Ciphersuite,
GroupIo, Id,
};
pub use dkg::*;
/// Create a key via a dealer key generation protocol.
pub fn key_gen<R: RngCore + CryptoRng, C: Ciphersuite>(
pub fn key_gen<R: RngCore + CryptoRng, C: GroupIo + Id>(
rng: &mut R,
threshold: u16,
participants: u16,

View File

@@ -23,7 +23,7 @@ rand_core = { version = "0.6", default-features = false, features = ["alloc"] }
zeroize = { version = "^1.5", default-features = false, features = ["alloc", "zeroize_derive"] }
std-shims = { version = "0.1", path = "../../../common/std-shims", default-features = false }
std-shims = { version = "0.1", path = "../../../common/std-shims", default-features = false, features = ["alloc"] }
transcript = { package = "flexible-transcript", path = "../../transcript", version = "^0.3.2", default-features = false, features = ["recommended"] }
@@ -31,13 +31,13 @@ ciphersuite = { path = "../../ciphersuite", version = "^0.4.1", default-features
multiexp = { path = "../../multiexp", version = "0.4", default-features = false }
generic-array = { version = "1", default-features = false, features = ["alloc"] }
blake2 = { version = "0.11.0-rc.0", default-features = false }
blake2 = { version = "0.11.0-rc.2", default-features = false }
rand_chacha = { version = "0.3", default-features = false }
generalized-bulletproofs = { git = "https://github.com/monero-oxide/monero-oxide", rev = "a6f8797007e768488568b821435cf5006517a962", default-features = false }
ec-divisors = { git = "https://github.com/monero-oxide/monero-oxide", rev = "a6f8797007e768488568b821435cf5006517a962", default-features = false }
generalized-bulletproofs-circuit-abstraction = { git = "https://github.com/monero-oxide/monero-oxide", rev = "a6f8797007e768488568b821435cf5006517a962", default-features = false }
generalized-bulletproofs-ec-gadgets = { git = "https://github.com/monero-oxide/monero-oxide", rev = "a6f8797007e768488568b821435cf5006517a962", default-features = false }
generalized-bulletproofs = { git = "https://github.com/monero-oxide/monero-oxide", rev = "dc1b3dbe436aae61ec363505052d4715d38ce1df", default-features = false }
ec-divisors = { git = "https://github.com/monero-oxide/monero-oxide", rev = "dc1b3dbe436aae61ec363505052d4715d38ce1df", default-features = false }
generalized-bulletproofs-circuit-abstraction = { git = "https://github.com/monero-oxide/monero-oxide", rev = "dc1b3dbe436aae61ec363505052d4715d38ce1df", default-features = false }
generalized-bulletproofs-ec-gadgets = { git = "https://github.com/monero-oxide/monero-oxide", rev = "dc1b3dbe436aae61ec363505052d4715d38ce1df", default-features = false }
dkg = { path = "..", default-features = false }
@@ -52,7 +52,7 @@ rand = { version = "0.8", default-features = false, features = ["std"] }
ciphersuite = { path = "../../ciphersuite", default-features = false, features = ["std"] }
embedwards25519 = { path = "../../embedwards25519", default-features = false, features = ["std"] }
dalek-ff-group = { path = "../../dalek-ff-group", default-features = false, features = ["std"] }
generalized-bulletproofs = { git = "https://github.com/monero-oxide/monero-oxide", rev = "a6f8797007e768488568b821435cf5006517a962", features = ["tests"] }
generalized-bulletproofs = { git = "https://github.com/monero-oxide/monero-oxide", rev = "dc1b3dbe436aae61ec363505052d4715d38ce1df", features = ["tests"] }
dkg-recovery = { path = "../recovery" }
[features]
@@ -86,6 +86,5 @@ std = [
]
secp256k1 = ["ciphersuite-kp256", "secq256k1"]
ed25519 = ["dalek-ff-group", "embedwards25519"]
ristretto = ["dalek-ff-group", "embedwards25519"]
tests = ["rand_core/getrandom"]
default = ["std"]


@@ -26,21 +26,9 @@ presented in section 4.2 is extended, with the following changes:
just one round.
For a gist of the verifiable encryption scheme, please see
https://gist.github.com/kayabaNerve/cfbde74b0660dfdf8dd55326d6ec33d7. Security
proofs are currently being worked on.
---
This library relies on an implementation of Bulletproofs and various
zero-knowledge gadgets. This library uses
[`generalized-bulletproofs`](https://docs.rs/generalized-bulletproofs),
[`generalized-bulletproofs-circuit-abstraction`](https://docs.rs/generalized-bulletproofs-circuit-abstraction),
and
[`generalized-bulletproofs-ec-gadgets`](https://docs.rs/generalized-bulletproofs-ec-gadgets)
from the Monero project's FCMP++ codebase. These libraries have received the
following audits in the past:
- https://github.com/kayabaNerve/monero-oxide/tree/fcmp++/audits/generalized-bulletproofs
- https://github.com/kayabaNerve/monero-oxide/tree/fcmp++/audits/fcmps
https://gist.github.com/kayabaNerve/cfbde74b0660dfdf8dd55326d6ec33d7. For
security proofs and audit information, please see
[here](../../../audits/crypto/dkg/evrf).
---


@@ -17,7 +17,7 @@ type Blake2s256Keyed = Blake2sMac<U32>;
use ciphersuite::{
group::{ff::FromUniformBytes, GroupEncoding},
Ciphersuite,
WrappedGroup, Id, GroupIo,
};
use ec_divisors::DivisorCurve;
@@ -27,10 +27,10 @@ use generalized_bulletproofs_ec_gadgets::*;
/// A pair of curves to perform the eVRF with.
pub trait Curves {
/// The towering curve, for which the resulting key is on.
type ToweringCurve: Ciphersuite<F: FromUniformBytes<64>>;
type ToweringCurve: Id + GroupIo<F: FromUniformBytes<64>>;
/// The embedded curve which participants represent their public keys over.
type EmbeddedCurve: Ciphersuite<
G: DivisorCurve<FieldElement = <Self::ToweringCurve as Ciphersuite>::F>,
type EmbeddedCurve: GroupIo<
G: DivisorCurve<FieldElement = <Self::ToweringCurve as WrappedGroup>::F>,
>;
/// The parameters to use the embedded curve with the discrete-log gadget.
type EmbeddedCurveParameters: DiscreteLogParameters;
@@ -49,14 +49,14 @@ impl<C: Curves> Generators<C> {
pub fn new(max_threshold: u16, max_participants: u16) -> Generators<C> {
let entropy = <Blake2s256Keyed as KeyInit>::new(&{
let mut key = Array::<u8, <Blake2s256Keyed as KeySizeUser>::KeySize>::default();
let key_len = key.len().min(<C::ToweringCurve as Ciphersuite>::ID.len());
let key_len = key.len().min(<C::ToweringCurve as Id>::ID.len());
{
let key: &mut [u8] = key.as_mut();
key[.. key_len].copy_from_slice(&<C::ToweringCurve as Ciphersuite>::ID[.. key_len])
key[.. key_len].copy_from_slice(&<C::ToweringCurve as Id>::ID[.. key_len])
}
key
})
.chain_update(<C::ToweringCurve as Ciphersuite>::generator().to_bytes())
.chain_update(<C::ToweringCurve as WrappedGroup>::generator().to_bytes())
.finalize()
.into_bytes();
let mut rng = ChaCha20Rng::from_seed(entropy.into());
@@ -71,7 +71,8 @@ impl<C: Curves> Generators<C> {
h_bold.push(crate::sample_point::<C::ToweringCurve>(&mut rng));
}
Self(
BpGenerators::new(<C::ToweringCurve as Ciphersuite>::generator(), h, g_bold, h_bold).unwrap(),
BpGenerators::new(<C::ToweringCurve as WrappedGroup>::generator(), h, g_bold, h_bold)
.unwrap(),
)
}
}
@@ -95,13 +96,3 @@ impl Curves for Ed25519 {
type EmbeddedCurve = embedwards25519::Embedwards25519;
type EmbeddedCurveParameters = embedwards25519::Embedwards25519;
}
/// Ristretto, and an elliptic curve defined over its scalar field (embedwards25519).
#[cfg(any(test, feature = "ristretto"))]
pub struct Ristretto;
#[cfg(any(test, feature = "ristretto"))]
impl Curves for Ristretto {
type ToweringCurve = dalek_ff_group::Ristretto;
type EmbeddedCurve = embedwards25519::Embedwards25519;
type EmbeddedCurveParameters = embedwards25519::Embedwards25519;
}

View File

@@ -1,4 +1,4 @@
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
#![cfg_attr(docsrs, feature(doc_cfg))]
#![doc = include_str!("../README.md")]
#![cfg_attr(not(feature = "std"), no_std)]
@@ -21,7 +21,7 @@ use ciphersuite::{
ff::{Field, PrimeField},
Group, GroupEncoding,
},
Ciphersuite,
WrappedGroup, GroupIo,
};
use multiexp::multiexp_vartime;
@@ -49,7 +49,7 @@ mod tests;
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct Participation<C: Curves> {
proof: Vec<u8>,
encrypted_secret_shares: HashMap<Participant, <C::ToweringCurve as Ciphersuite>::F>,
encrypted_secret_shares: HashMap<Participant, <C::ToweringCurve as WrappedGroup>::F>,
}
impl<C: Curves> Participation<C> {
@@ -79,7 +79,7 @@ impl<C: Curves> Participation<C> {
let mut encrypted_secret_shares = HashMap::with_capacity(usize::from(n));
for i in Participant::iter().take(usize::from(n)) {
encrypted_secret_shares.insert(i, <C::ToweringCurve as Ciphersuite>::read_F(reader)?);
encrypted_secret_shares.insert(i, <C::ToweringCurve as GroupIo>::read_F(reader)?);
}
Ok(Self { proof, encrypted_secret_shares })
@@ -151,14 +151,14 @@ pub enum VerifyResult<C: Curves> {
pub struct Dkg<C: Curves> {
t: u16,
n: u16,
evrf_public_keys: Vec<<C::EmbeddedCurve as Ciphersuite>::G>,
verification_shares: HashMap<Participant, <C::ToweringCurve as Ciphersuite>::G>,
evrf_public_keys: Vec<<C::EmbeddedCurve as WrappedGroup>::G>,
verification_shares: HashMap<Participant, <C::ToweringCurve as WrappedGroup>::G>,
#[allow(clippy::type_complexity)]
encrypted_secret_shares: HashMap<
Participant,
HashMap<
Participant,
([<C::EmbeddedCurve as Ciphersuite>::G; 2], <C::ToweringCurve as Ciphersuite>::F),
([<C::EmbeddedCurve as WrappedGroup>::G; 2], <C::ToweringCurve as WrappedGroup>::F),
>,
>,
}
@@ -167,7 +167,7 @@ impl<C: Curves> Dkg<C> {
// Form the initial transcript for the proofs.
fn initial_transcript(
invocation: [u8; 32],
evrf_public_keys: &[<C::EmbeddedCurve as Ciphersuite>::G],
evrf_public_keys: &[<C::EmbeddedCurve as WrappedGroup>::G],
t: u16,
) -> [u8; 32] {
let mut transcript = Blake2s256::new();
@@ -188,8 +188,8 @@ impl<C: Curves> Dkg<C> {
generators: &Generators<C>,
context: [u8; 32],
t: u16,
evrf_public_keys: &[<C::EmbeddedCurve as Ciphersuite>::G],
evrf_private_key: &Zeroizing<<C::EmbeddedCurve as Ciphersuite>::F>,
evrf_public_keys: &[<C::EmbeddedCurve as WrappedGroup>::G],
evrf_private_key: &Zeroizing<<C::EmbeddedCurve as WrappedGroup>::F>,
) -> Result<Participation<C>, Error> {
let Ok(n) = u16::try_from(evrf_public_keys.len()) else {
Err(Error::TooManyParticipants { provided: evrf_public_keys.len() })?
@@ -202,7 +202,8 @@ impl<C: Curves> Dkg<C> {
};
// This also ensures the private key is not 0, due to the prior check the identity point wasn't
// present
let evrf_public_key = <C::EmbeddedCurve as Ciphersuite>::generator() * evrf_private_key.deref();
let evrf_public_key =
<C::EmbeddedCurve as WrappedGroup>::generator() * evrf_private_key.deref();
if !evrf_public_keys.contains(&evrf_public_key) {
Err(Error::NotAParticipant)?;
};
@@ -231,7 +232,7 @@ impl<C: Curves> Dkg<C> {
let mut encrypted_secret_shares = HashMap::with_capacity(usize::from(n));
for (l, encryption_key) in Participant::iter().take(usize::from(n)).zip(encryption_keys) {
let share = polynomial::<<C::ToweringCurve as Ciphersuite>::F>(&coefficients, l);
let share = polynomial::<<C::ToweringCurve as WrappedGroup>::F>(&coefficients, l);
encrypted_secret_shares.insert(l, *share + *encryption_key);
}
@@ -243,26 +244,26 @@ impl<C: Curves> Dkg<C> {
#[allow(clippy::type_complexity)]
fn verifiable_encryption_statements<C: Curves>(
rng: &mut (impl RngCore + CryptoRng),
coefficients: &[<C::ToweringCurve as Ciphersuite>::G],
encryption_key_commitments: &[<C::ToweringCurve as Ciphersuite>::G],
encrypted_secret_shares: &HashMap<Participant, <C::ToweringCurve as Ciphersuite>::F>,
coefficients: &[<C::ToweringCurve as WrappedGroup>::G],
encryption_key_commitments: &[<C::ToweringCurve as WrappedGroup>::G],
encrypted_secret_shares: &HashMap<Participant, <C::ToweringCurve as WrappedGroup>::F>,
) -> (
<C::ToweringCurve as Ciphersuite>::F,
Vec<(<C::ToweringCurve as Ciphersuite>::F, <C::ToweringCurve as Ciphersuite>::G)>,
<C::ToweringCurve as WrappedGroup>::F,
Vec<(<C::ToweringCurve as WrappedGroup>::F, <C::ToweringCurve as WrappedGroup>::G)>,
) {
let mut g_scalar = <C::ToweringCurve as Ciphersuite>::F::ZERO;
let mut g_scalar = <C::ToweringCurve as WrappedGroup>::F::ZERO;
let mut pairs = Vec::with_capacity(coefficients.len() + encryption_key_commitments.len());
// Push on the commitments to the polynomial being secret-shared
for coefficient in coefficients {
// This uses `0` as we'll add to it later, given its fixed position
pairs.push((<C::ToweringCurve as Ciphersuite>::F::ZERO, *coefficient));
pairs.push((<C::ToweringCurve as WrappedGroup>::F::ZERO, *coefficient));
}
for (i, encrypted_secret_share) in encrypted_secret_shares {
let encryption_key_commitment = encryption_key_commitments[usize::from(u16::from(*i)) - 1];
let weight = <C::ToweringCurve as Ciphersuite>::F::random(&mut *rng);
let weight = <C::ToweringCurve as WrappedGroup>::F::random(&mut *rng);
/*
The encrypted secret share scaling `G`, minus the encryption key commitment, minus the
@@ -274,7 +275,7 @@ fn verifiable_encryption_statements<C: Curves>(
pairs.push((weight, encryption_key_commitment));
// Calculate the commitment to the secret share via the commitments to the polynomial
{
let i = <C::ToweringCurve as Ciphersuite>::F::from(u64::from(u16::from(*i)));
let i = <C::ToweringCurve as WrappedGroup>::F::from(u64::from(u16::from(*i)));
(0 .. coefficients.len()).fold(weight, |exp, j| {
pairs[j].0 += exp;
exp * i
@@ -300,7 +301,7 @@ impl<C: Curves> Dkg<C> {
generators: &Generators<C>,
context: [u8; 32],
t: u16,
evrf_public_keys: &[<C::EmbeddedCurve as Ciphersuite>::G],
evrf_public_keys: &[<C::EmbeddedCurve as WrappedGroup>::G],
participations: &HashMap<Participant, Participation<C>>,
) -> Result<VerifyResult<C>, Error> {
let Ok(n) = u16::try_from(evrf_public_keys.len()) else {
@@ -386,7 +387,7 @@ impl<C: Curves> Dkg<C> {
{
let mut share_verification_statements_actual = HashMap::with_capacity(valid.len());
if !{
let mut g_scalar = <C::ToweringCurve as Ciphersuite>::F::ZERO;
let mut g_scalar = <C::ToweringCurve as WrappedGroup>::F::ZERO;
let mut pairs = Vec::with_capacity(valid.len() * (usize::from(t) + evrf_public_keys.len()));
for (i, (encrypted_secret_shares, data)) in &valid {
let (this_g_scalar, mut these_pairs) = verifiable_encryption_statements::<C>(
@@ -417,9 +418,11 @@ impl<C: Curves> Dkg<C> {
let sum_encrypted_secret_share = sum_encrypted_secret_shares
.get(j)
.copied()
.unwrap_or(<C::ToweringCurve as Ciphersuite>::F::ZERO);
let sum_mask =
sum_masks.get(j).copied().unwrap_or(<C::ToweringCurve as Ciphersuite>::G::identity());
.unwrap_or(<C::ToweringCurve as WrappedGroup>::F::ZERO);
let sum_mask = sum_masks
.get(j)
.copied()
.unwrap_or(<C::ToweringCurve as WrappedGroup>::G::identity());
sum_encrypted_secret_shares.insert(*j, sum_encrypted_secret_share + enc_share);
let j_index = usize::from(u16::from(*j)) - 1;
@@ -487,7 +490,7 @@ impl<C: Curves> Dkg<C> {
for i in Participant::iter().take(usize::from(n)) {
verification_shares.insert(
i,
(<C::ToweringCurve as Ciphersuite>::generator() * sum_encrypted_secret_shares[&i]) -
(<C::ToweringCurve as WrappedGroup>::generator() * sum_encrypted_secret_shares[&i]) -
sum_masks[&i],
);
}
@@ -506,9 +509,10 @@ impl<C: Curves> Dkg<C> {
/// This will return _all_ keys belonging to the participant.
pub fn keys(
&self,
evrf_private_key: &Zeroizing<<C::EmbeddedCurve as Ciphersuite>::F>,
evrf_private_key: &Zeroizing<<C::EmbeddedCurve as WrappedGroup>::F>,
) -> Vec<ThresholdKeys<C::ToweringCurve>> {
let evrf_public_key = <C::EmbeddedCurve as Ciphersuite>::generator() * evrf_private_key.deref();
let evrf_public_key =
<C::EmbeddedCurve as WrappedGroup>::generator() * evrf_private_key.deref();
let mut is = Vec::with_capacity(1);
for (i, evrf_key) in Participant::iter().zip(self.evrf_public_keys.iter()) {
if *evrf_key == evrf_public_key {
@@ -518,14 +522,14 @@ impl<C: Curves> Dkg<C> {
let mut res = Vec::with_capacity(is.len());
for i in is {
let mut secret_share = Zeroizing::new(<C::ToweringCurve as Ciphersuite>::F::ZERO);
let mut secret_share = Zeroizing::new(<C::ToweringCurve as WrappedGroup>::F::ZERO);
for shares in self.encrypted_secret_shares.values() {
let (ecdh_commitments, encrypted_secret_share) = shares[&i];
let mut ecdh = Zeroizing::new(<C::ToweringCurve as Ciphersuite>::F::ZERO);
let mut ecdh = Zeroizing::new(<C::ToweringCurve as WrappedGroup>::F::ZERO);
for point in ecdh_commitments {
let (mut x, mut y) =
<C::EmbeddedCurve as Ciphersuite>::G::to_xy(point * evrf_private_key.deref()).unwrap();
<C::EmbeddedCurve as WrappedGroup>::G::to_xy(point * evrf_private_key.deref()).unwrap();
*ecdh += x;
x.zeroize();
y.zeroize();
@@ -534,7 +538,7 @@ impl<C: Curves> Dkg<C> {
}
debug_assert_eq!(
self.verification_shares[&i],
<C::ToweringCurve as Ciphersuite>::G::generator() * secret_share.deref()
<C::ToweringCurve as WrappedGroup>::generator() * secret_share.deref()
);
res.push(
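
For orientation (a paraphrase, not part of the diff): the relation `participate` and `verifiable_encryption_statements` above work with can be summarised as follows. With the secret polynomial $f(x) = \sum_{j=0}^{t-1} a_j x^j$, participant $i$'s share is $s_i = f(i)$ and its encrypted form is $e_i = s_i + k_i$, where $k_i$ is the eVRF-derived encryption key. Against the published commitments $A_j = a_j G$ and $K_i = k_i G$, each encrypted share satisfies

$$ e_i \cdot G \;=\; K_i + \sum_{j=0}^{t-1} i^j A_j, $$

and the code checks all of these at once by drawing a random weight $w_i$ per share and verifying the single multiexponentiation

$$ \sum_i w_i \Big( e_i \cdot G - K_i - \sum_{j=0}^{t-1} i^j A_j \Big) \;=\; 0. $$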

View File

@@ -8,7 +8,7 @@ use zeroize::Zeroizing;
use rand_core::{RngCore, CryptoRng, SeedableRng};
use rand_chacha::ChaCha20Rng;
use ciphersuite::{group::ff::Field, Ciphersuite};
use ciphersuite::{group::ff::Field, WrappedGroup};
use generalized_bulletproofs::{
Generators, BatchVerifier, PedersenCommitment, PedersenVectorCommitment,
@@ -28,8 +28,8 @@ mod tape;
use tape::*;
type EmbeddedPoint<C> = (
<<<C as Curves>::EmbeddedCurve as Ciphersuite>::G as DivisorCurve>::FieldElement,
<<<C as Curves>::EmbeddedCurve as Ciphersuite>::G as DivisorCurve>::FieldElement,
<<<C as Curves>::EmbeddedCurve as WrappedGroup>::G as DivisorCurve>::FieldElement,
<<<C as Curves>::EmbeddedCurve as WrappedGroup>::G as DivisorCurve>::FieldElement,
);
#[allow(non_snake_case)]
@@ -37,14 +37,15 @@ struct Circuit<
'a,
C: Curves,
CG: Iterator<
Item = ChallengedGenerator<<C::ToweringCurve as Ciphersuite>::F, C::EmbeddedCurveParameters>,
Item = ChallengedGenerator<<C::ToweringCurve as WrappedGroup>::F, C::EmbeddedCurveParameters>,
>,
> {
curve_spec: &'a CurveSpec<<<C::EmbeddedCurve as Ciphersuite>::G as DivisorCurve>::FieldElement>,
curve_spec: &'a CurveSpec<<<C::EmbeddedCurve as WrappedGroup>::G as DivisorCurve>::FieldElement>,
circuit: &'a mut BpCircuit<C::ToweringCurve>,
challenge: DiscreteLogChallenge<<C::ToweringCurve as Ciphersuite>::F, C::EmbeddedCurveParameters>,
challenge:
DiscreteLogChallenge<<C::ToweringCurve as WrappedGroup>::F, C::EmbeddedCurveParameters>,
challenged_G:
ChallengedGenerator<<C::ToweringCurve as Ciphersuite>::F, C::EmbeddedCurveParameters>,
ChallengedGenerator<<C::ToweringCurve as WrappedGroup>::F, C::EmbeddedCurveParameters>,
challenged_generators: &'a mut CG,
tape: Tape,
pedersen_commitment_tape: PedersenCommitmentTape,
@@ -54,7 +55,7 @@ impl<
'a,
C: Curves,
CG: Iterator<
Item = ChallengedGenerator<<C::ToweringCurve as Ciphersuite>::F, C::EmbeddedCurveParameters>,
Item = ChallengedGenerator<<C::ToweringCurve as WrappedGroup>::F, C::EmbeddedCurveParameters>,
>,
> Circuit<'a, C, CG>
{
@@ -92,7 +93,7 @@ impl<
&self.challenge,
&challenged_generator,
);
lincomb = lincomb.term(<C::ToweringCurve as Ciphersuite>::F::ONE, point.x());
lincomb = lincomb.term(<C::ToweringCurve as WrappedGroup>::F::ONE, point.x());
}
/*
Constrain the sum of the two `x` coordinates to be equal to the value committed to in a
@@ -137,7 +138,7 @@ impl<
&self.challenge,
&challenged_public_key,
);
lincomb = lincomb.term(<C::ToweringCurve as Ciphersuite>::F::ONE, point.x());
lincomb = lincomb.term(<C::ToweringCurve as WrappedGroup>::F::ONE, point.x());
debug_assert!(point_with_dlogs.next().is_none());
}
@@ -152,20 +153,20 @@ impl<
/// The result of proving.
pub(super) struct ProveResult<C: Curves> {
/// The coefficients for use in the DKG.
pub(super) coefficients: Vec<Zeroizing<<C::ToweringCurve as Ciphersuite>::F>>,
pub(super) coefficients: Vec<Zeroizing<<C::ToweringCurve as WrappedGroup>::F>>,
/// The masks to encrypt secret shares with.
pub(super) encryption_keys: Vec<Zeroizing<<C::ToweringCurve as Ciphersuite>::F>>,
pub(super) encryption_keys: Vec<Zeroizing<<C::ToweringCurve as WrappedGroup>::F>>,
/// The proof itself.
pub(super) proof: Vec<u8>,
}
pub(super) struct Verified<C: Curves> {
/// The commitments to the coefficients used within the DKG.
pub(super) coefficients: Vec<<C::ToweringCurve as Ciphersuite>::G>,
pub(super) coefficients: Vec<<C::ToweringCurve as WrappedGroup>::G>,
/// The ephemeral public keys to perform ECDHs with
pub(super) ecdh_commitments: Vec<[<C::EmbeddedCurve as Ciphersuite>::G; 2]>,
pub(super) ecdh_commitments: Vec<[<C::EmbeddedCurve as WrappedGroup>::G; 2]>,
/// The commitments to the masks used to encrypt secret shares with.
pub(super) encryption_key_commitments: Vec<<C::ToweringCurve as Ciphersuite>::G>,
pub(super) encryption_key_commitments: Vec<<C::ToweringCurve as WrappedGroup>::G>,
}
impl<C: Curves> fmt::Debug for Verified<C> {
@@ -175,7 +176,7 @@ impl<C: Curves> fmt::Debug for Verified<C> {
}
type GeneratorTable<C> = generalized_bulletproofs_ec_gadgets::GeneratorTable<
<<<C as Curves>::EmbeddedCurve as Ciphersuite>::G as DivisorCurve>::FieldElement,
<<<C as Curves>::EmbeddedCurve as WrappedGroup>::G as DivisorCurve>::FieldElement,
<C as Curves>::EmbeddedCurveParameters,
>;
@@ -219,7 +220,7 @@ impl<C: Curves> Proof<C> {
}
fn circuit(
curve_spec: &CurveSpec<<<C::EmbeddedCurve as Ciphersuite>::G as DivisorCurve>::FieldElement>,
curve_spec: &CurveSpec<<<C::EmbeddedCurve as WrappedGroup>::G as DivisorCurve>::FieldElement>,
evrf_public_key: EmbeddedPoint<C>,
coefficients: usize,
ecdh_commitments: &[[EmbeddedPoint<C>; 2]],
@@ -281,7 +282,7 @@ impl<C: Curves> Proof<C> {
fn sample_coefficients_evrf_points(
seed: [u8; 32],
coefficients: usize,
) -> Vec<<C::EmbeddedCurve as Ciphersuite>::G> {
) -> Vec<<C::EmbeddedCurve as WrappedGroup>::G> {
let mut rng = ChaCha20Rng::from_seed(seed);
let quantity = 2 * coefficients;
let mut res = Vec::with_capacity(quantity);
@@ -293,28 +294,29 @@ impl<C: Curves> Proof<C> {
/// Create the required tables for the generators.
fn generator_tables(
coefficients_evrf_points: &[<C::EmbeddedCurve as Ciphersuite>::G],
participants: &[<<C as Curves>::EmbeddedCurve as Ciphersuite>::G],
coefficients_evrf_points: &[<C::EmbeddedCurve as WrappedGroup>::G],
participants: &[<<C as Curves>::EmbeddedCurve as WrappedGroup>::G],
) -> Vec<GeneratorTable<C>> {
let curve_spec = CurveSpec {
a: <<C as Curves>::EmbeddedCurve as Ciphersuite>::G::a(),
b: <<C as Curves>::EmbeddedCurve as Ciphersuite>::G::b(),
a: <<C as Curves>::EmbeddedCurve as WrappedGroup>::G::a(),
b: <<C as Curves>::EmbeddedCurve as WrappedGroup>::G::b(),
};
let mut generator_tables =
Vec::with_capacity(1 + coefficients_evrf_points.len() + participants.len());
{
let (x, y) =
<C::EmbeddedCurve as Ciphersuite>::G::to_xy(<C::EmbeddedCurve as Ciphersuite>::generator())
.unwrap();
let (x, y) = <C::EmbeddedCurve as WrappedGroup>::G::to_xy(
<C::EmbeddedCurve as WrappedGroup>::generator(),
)
.unwrap();
generator_tables.push(GeneratorTable::<C>::new(&curve_spec, x, y));
}
for generator in coefficients_evrf_points {
let (x, y) = <C::EmbeddedCurve as Ciphersuite>::G::to_xy(*generator).unwrap();
let (x, y) = <C::EmbeddedCurve as WrappedGroup>::G::to_xy(*generator).unwrap();
generator_tables.push(GeneratorTable::<C>::new(&curve_spec, x, y));
}
for generator in participants {
let (x, y) = <C::EmbeddedCurve as Ciphersuite>::G::to_xy(*generator).unwrap();
let (x, y) = <C::EmbeddedCurve as WrappedGroup>::G::to_xy(*generator).unwrap();
generator_tables.push(GeneratorTable::<C>::new(&curve_spec, x, y));
}
generator_tables
@@ -325,12 +327,12 @@ impl<C: Curves> Proof<C> {
generators: &Generators<C::ToweringCurve>,
transcript: [u8; 32],
coefficients: usize,
participant_public_keys: &[<<C as Curves>::EmbeddedCurve as Ciphersuite>::G],
evrf_private_key: &Zeroizing<<<C as Curves>::EmbeddedCurve as Ciphersuite>::F>,
participant_public_keys: &[<<C as Curves>::EmbeddedCurve as WrappedGroup>::G],
evrf_private_key: &Zeroizing<<<C as Curves>::EmbeddedCurve as WrappedGroup>::F>,
) -> Result<ProveResult<C>, AcProveError> {
let curve_spec = CurveSpec {
a: <<C as Curves>::EmbeddedCurve as Ciphersuite>::G::a(),
b: <<C as Curves>::EmbeddedCurve as Ciphersuite>::G::b(),
a: <<C as Curves>::EmbeddedCurve as WrappedGroup>::G::a(),
b: <<C as Curves>::EmbeddedCurve as WrappedGroup>::G::b(),
};
let coefficients_evrf_points = Self::sample_coefficients_evrf_points(transcript, coefficients);
@@ -340,7 +342,7 @@ impl<C: Curves> Proof<C> {
// Push a discrete logarithm onto the tape
let discrete_log =
|vector_commitment_tape: &mut Vec<_>,
dlog: &ScalarDecomposition<<<C as Curves>::EmbeddedCurve as Ciphersuite>::F>| {
dlog: &ScalarDecomposition<<<C as Curves>::EmbeddedCurve as WrappedGroup>::F>| {
for coefficient in dlog.decomposition() {
vector_commitment_tape.push(<_>::from(*coefficient));
}
@@ -351,8 +353,8 @@ impl<C: Curves> Proof<C> {
// Returns the point for which the claim was made.
let discrete_log_claim =
|vector_commitment_tape: &mut Vec<_>,
dlog: &ScalarDecomposition<<<C as Curves>::EmbeddedCurve as Ciphersuite>::F>,
generator: <<C as Curves>::EmbeddedCurve as Ciphersuite>::G| {
dlog: &ScalarDecomposition<<<C as Curves>::EmbeddedCurve as WrappedGroup>::F>,
generator: <<C as Curves>::EmbeddedCurve as WrappedGroup>::G| {
{
let divisor =
Zeroizing::new(dlog.scalar_mul_divisor(generator).normalize_x_coefficient());
@@ -368,12 +370,12 @@ impl<C: Curves> Proof<C> {
.y_coefficients
.first()
.copied()
.unwrap_or(<C::ToweringCurve as Ciphersuite>::F::ZERO),
.unwrap_or(<C::ToweringCurve as WrappedGroup>::F::ZERO),
);
}
let dh = generator * dlog.scalar();
let (x, y) = <C::EmbeddedCurve as Ciphersuite>::G::to_xy(dh).unwrap();
let (x, y) = <C::EmbeddedCurve as WrappedGroup>::G::to_xy(dh).unwrap();
vector_commitment_tape.push(x);
vector_commitment_tape.push(y);
(dh, (x, y))
@@ -387,7 +389,7 @@ impl<C: Curves> Proof<C> {
let mut coefficients = Vec::with_capacity(coefficients);
let evrf_public_key = {
let evrf_private_key =
ScalarDecomposition::<<C::EmbeddedCurve as Ciphersuite>::F>::new(**evrf_private_key)
ScalarDecomposition::<<C::EmbeddedCurve as WrappedGroup>::F>::new(**evrf_private_key)
.expect("eVRF private key was zero");
discrete_log(&mut vector_commitment_tape, &evrf_private_key);
@@ -396,12 +398,12 @@ impl<C: Curves> Proof<C> {
let (_, evrf_public_key) = discrete_log_claim(
&mut vector_commitment_tape,
&evrf_private_key,
<<C as Curves>::EmbeddedCurve as Ciphersuite>::generator(),
<<C as Curves>::EmbeddedCurve as WrappedGroup>::generator(),
);
// Push the divisor for each point we use in the eVRF
for pair in coefficients_evrf_points.chunks(2) {
let mut coefficient = Zeroizing::new(<C::ToweringCurve as Ciphersuite>::F::ZERO);
let mut coefficient = Zeroizing::new(<C::ToweringCurve as WrappedGroup>::F::ZERO);
for point in pair {
let (_, (dh_x, _)) =
discrete_log_claim(&mut vector_commitment_tape, &evrf_private_key, *point);
@@ -418,15 +420,16 @@ impl<C: Curves> Proof<C> {
let mut ecdh_commitments = Vec::with_capacity(2 * participant_public_keys.len());
let mut ecdh_commitments_xy = Vec::with_capacity(participant_public_keys.len());
for participant_public_key in participant_public_keys {
let mut ecdh_commitments_xy_i =
[(<C::ToweringCurve as Ciphersuite>::F::ZERO, <C::ToweringCurve as Ciphersuite>::F::ZERO);
2];
let mut encryption_key = Zeroizing::new(<C::ToweringCurve as Ciphersuite>::F::ZERO);
let mut ecdh_commitments_xy_i = [(
<C::ToweringCurve as WrappedGroup>::F::ZERO,
<C::ToweringCurve as WrappedGroup>::F::ZERO,
); 2];
let mut encryption_key = Zeroizing::new(<C::ToweringCurve as WrappedGroup>::F::ZERO);
for ecdh_commitments_xy_i_j_dest in &mut ecdh_commitments_xy_i {
let mut ecdh_ephemeral_secret;
loop {
ecdh_ephemeral_secret =
Zeroizing::new(<C::EmbeddedCurve as Ciphersuite>::F::random(&mut *rng));
Zeroizing::new(<C::EmbeddedCurve as WrappedGroup>::F::random(&mut *rng));
// 0 would produce the identity, which isn't representable within the discrete-log proof.
if bool::from(!ecdh_ephemeral_secret.is_zero()) {
break;
@@ -434,7 +437,7 @@ impl<C: Curves> Proof<C> {
}
let ecdh_ephemeral_secret =
ScalarDecomposition::<<C::EmbeddedCurve as Ciphersuite>::F>::new(*ecdh_ephemeral_secret)
ScalarDecomposition::<<C::EmbeddedCurve as WrappedGroup>::F>::new(*ecdh_ephemeral_secret)
.expect("ECDH ephemeral secret zero");
discrete_log(&mut vector_commitment_tape, &ecdh_ephemeral_secret);
@@ -442,7 +445,7 @@ impl<C: Curves> Proof<C> {
let (ecdh_commitment, ecdh_commitment_xy_i_j) = discrete_log_claim(
&mut vector_commitment_tape,
&ecdh_ephemeral_secret,
<<C as Curves>::EmbeddedCurve as Ciphersuite>::generator(),
<<C as Curves>::EmbeddedCurve as WrappedGroup>::generator(),
);
ecdh_commitments.push(ecdh_commitment);
*ecdh_commitments_xy_i_j_dest = ecdh_commitment_xy_i_j;
@@ -470,7 +473,7 @@ impl<C: Curves> Proof<C> {
for chunk in vector_commitment_tape.chunks(generators_to_use) {
vector_commitments.push(PedersenVectorCommitment {
g_values: chunk.into(),
mask: <C::ToweringCurve as Ciphersuite>::F::random(&mut *rng),
mask: <C::ToweringCurve as WrappedGroup>::F::random(&mut *rng),
});
}
@@ -479,13 +482,13 @@ impl<C: Curves> Proof<C> {
for coefficient in &coefficients {
commitments.push(PedersenCommitment {
value: **coefficient,
mask: <C::ToweringCurve as Ciphersuite>::F::random(&mut *rng),
mask: <C::ToweringCurve as WrappedGroup>::F::random(&mut *rng),
});
}
for enc_mask in &encryption_keys {
commitments.push(PedersenCommitment {
value: **enc_mask,
mask: <C::ToweringCurve as Ciphersuite>::F::random(&mut *rng),
mask: <C::ToweringCurve as WrappedGroup>::F::random(&mut *rng),
});
}
@@ -536,13 +539,13 @@ impl<C: Curves> Proof<C> {
}
// Prove the openings of the commitments were correct
let mut x = Zeroizing::new(<C::ToweringCurve as Ciphersuite>::F::ZERO);
let mut x = Zeroizing::new(<C::ToweringCurve as WrappedGroup>::F::ZERO);
for commitment in commitments {
*x += commitment.mask * transcript.challenge::<C::ToweringCurve>();
}
// Produce a Schnorr PoK for the weighted-sum of the Pedersen commitments' blinding factors
let r = Zeroizing::new(<C::ToweringCurve as Ciphersuite>::F::random(&mut *rng));
let r = Zeroizing::new(<C::ToweringCurve as WrappedGroup>::F::random(&mut *rng));
transcript.push_point(&(generators.h() * r.deref()));
let c = transcript.challenge::<C::ToweringCurve>();
transcript.push_scalar((c * x.deref()) + r.deref());
@@ -557,14 +560,14 @@ impl<C: Curves> Proof<C> {
verifier: &mut BatchVerifier<C::ToweringCurve>,
transcript: [u8; 32],
coefficients: usize,
participant_public_keys: &[<<C as Curves>::EmbeddedCurve as Ciphersuite>::G],
evrf_public_key: <<C as Curves>::EmbeddedCurve as Ciphersuite>::G,
participant_public_keys: &[<<C as Curves>::EmbeddedCurve as WrappedGroup>::G],
evrf_public_key: <<C as Curves>::EmbeddedCurve as WrappedGroup>::G,
proof: &[u8],
) -> Result<Verified<C>, ()> {
let (mut transcript, ecdh_commitments, pedersen_commitments) = {
let curve_spec = CurveSpec {
a: <<C as Curves>::EmbeddedCurve as Ciphersuite>::G::a(),
b: <<C as Curves>::EmbeddedCurve as Ciphersuite>::G::b(),
a: <<C as Curves>::EmbeddedCurve as WrappedGroup>::G::a(),
b: <<C as Curves>::EmbeddedCurve as WrappedGroup>::G::b(),
};
let coefficients_evrf_points =
@@ -600,9 +603,9 @@ impl<C: Curves> Proof<C> {
ecdh_commitments.push(ecdh_commitments_i);
// This inherently bans using the identity point, as it won't have an affine representation
ecdh_commitments_xy.push([
<<C::EmbeddedCurve as Ciphersuite>::G as DivisorCurve>::to_xy(ecdh_commitments_i[0])
<<C::EmbeddedCurve as WrappedGroup>::G as DivisorCurve>::to_xy(ecdh_commitments_i[0])
.ok_or(())?,
<<C::EmbeddedCurve as Ciphersuite>::G as DivisorCurve>::to_xy(ecdh_commitments_i[1])
<<C::EmbeddedCurve as WrappedGroup>::G as DivisorCurve>::to_xy(ecdh_commitments_i[1])
.ok_or(())?,
]);
}
@@ -610,7 +613,7 @@ impl<C: Curves> Proof<C> {
let mut circuit = BpCircuit::verify();
Self::circuit(
&curve_spec,
<C::EmbeddedCurve as Ciphersuite>::G::to_xy(evrf_public_key).ok_or(())?,
<C::EmbeddedCurve as WrappedGroup>::G::to_xy(evrf_public_key).ok_or(())?,
coefficients,
&ecdh_commitments_xy,
&generator_tables.iter().collect::<Vec<_>>(),
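
The tail of `prove` above closes with a standard Schnorr proof of knowledge for the challenge-weighted sum of the Pedersen blinding factors. In notation of my own (with the verifier's side assumed, as it isn't shown in this hunk): with masks $m_i$ and per-commitment challenges $c_i$ drawn from the transcript, the prover forms

$$ x = \sum_i c_i m_i, \qquad R = r H, \qquad s = c x + r $$

for a fresh random $r$ and a further challenge $c$, and a verifier would apply the usual Schnorr check $s H = c X + R$ against whatever $X = x H$ it reconstructs from the published commitments.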

View File

@@ -4,11 +4,11 @@ use zeroize::Zeroizing;
use rand_core::OsRng;
use rand::seq::SliceRandom;
use ciphersuite::{group::ff::Field, Ciphersuite};
use ciphersuite::{group::ff::Field, WrappedGroup};
use embedwards25519::Embedwards25519;
use dkg_recovery::recover_key;
use crate::{Participant, Curves, Generators, VerifyResult, Dkg, Ristretto};
use crate::{Participant, Curves, Generators, VerifyResult, Dkg, Ed25519};
mod proof;
@@ -17,14 +17,14 @@ const PARTICIPANTS: u16 = 5;
#[test]
fn dkg() {
let generators = Generators::<Ristretto>::new(THRESHOLD, PARTICIPANTS);
let generators = Generators::<Ed25519>::new(THRESHOLD, PARTICIPANTS);
let context = [0; 32];
let mut priv_keys = vec![];
let mut pub_keys = vec![];
for i in 0 .. PARTICIPANTS {
let priv_key = <Embedwards25519 as Ciphersuite>::F::random(&mut OsRng);
pub_keys.push(<Embedwards25519 as Ciphersuite>::generator() * priv_key);
let priv_key = <Embedwards25519 as WrappedGroup>::F::random(&mut OsRng);
pub_keys.push(<Embedwards25519 as WrappedGroup>::generator() * priv_key);
priv_keys.push((Participant::new(1 + i).unwrap(), Zeroizing::new(priv_key)));
}
@@ -34,27 +34,15 @@ fn dkg() {
for (i, priv_key) in priv_keys.iter().take(usize::from(THRESHOLD)) {
participations.insert(
*i,
Dkg::<Ristretto>::participate(
&mut OsRng,
&generators,
context,
THRESHOLD,
&pub_keys,
priv_key,
)
.unwrap(),
Dkg::<Ed25519>::participate(&mut OsRng, &generators, context, THRESHOLD, &pub_keys, priv_key)
.unwrap(),
);
}
let VerifyResult::Valid(dkg) = Dkg::<Ristretto>::verify(
&mut OsRng,
&generators,
context,
THRESHOLD,
&pub_keys,
&participations,
)
.unwrap() else {
let VerifyResult::Valid(dkg) =
Dkg::<Ed25519>::verify(&mut OsRng, &generators, context, THRESHOLD, &pub_keys, &participations)
.unwrap()
else {
panic!("verify didn't return VerifyResult::Valid")
};
@@ -80,7 +68,7 @@ fn dkg() {
// TODO: Test for all possible combinations of keys
assert_eq!(
<<Ristretto as Curves>::ToweringCurve as Ciphersuite>::generator() *
<<Ed25519 as Curves>::ToweringCurve as WrappedGroup>::generator() *
*recover_key(&all_keys.values().cloned().collect::<Vec<_>>()).unwrap(),
group_key.unwrap()
);

View File

@@ -6,13 +6,13 @@ use zeroize::Zeroizing;
use ciphersuite::{
group::{ff::Field, Group},
Ciphersuite,
WrappedGroup,
};
use generalized_bulletproofs::{Generators, tests::insecure_test_generators};
use crate::{
Curves, Ristretto,
Curves, Ed25519,
proof::*,
tests::{THRESHOLD, PARTICIPANTS},
};
@@ -20,9 +20,9 @@ use crate::{
fn proof<C: Curves>() {
let generators = insecure_test_generators(&mut OsRng, 2048).unwrap();
let embedded_private_key =
Zeroizing::new(<C::EmbeddedCurve as Ciphersuite>::F::random(&mut OsRng));
Zeroizing::new(<C::EmbeddedCurve as WrappedGroup>::F::random(&mut OsRng));
let ecdh_public_keys: [_; PARTICIPANTS as usize] =
core::array::from_fn(|_| <C::EmbeddedCurve as Ciphersuite>::G::random(&mut OsRng));
core::array::from_fn(|_| <C::EmbeddedCurve as WrappedGroup>::G::random(&mut OsRng));
let time = Instant::now();
let res = Proof::<C>::prove(
&mut OsRng,
@@ -54,5 +54,5 @@ fn proof<C: Curves>() {
#[test]
fn ristretto_proof() {
proof::<Ristretto>();
proof::<Ed25519>();
}

View File

@@ -5,7 +5,7 @@ use rand_core::{RngCore, CryptoRng};
use ciphersuite::{
group::{ff::PrimeField, Group, GroupEncoding},
Ciphersuite,
GroupIo,
};
use dkg::Participant;
@@ -13,7 +13,7 @@ use dkg::Participant;
/// Sample a random, unbiased point on the elliptic curve with an unknown discrete logarithm.
///
/// This keeps it simple by using rejection sampling.
pub(crate) fn sample_point<C: Ciphersuite>(rng: &mut (impl RngCore + CryptoRng)) -> C::G {
pub(crate) fn sample_point<C: GroupIo>(rng: &mut (impl RngCore + CryptoRng)) -> C::G {
let mut repr = <C::G as GroupEncoding>::Repr::default();
loop {
rng.fill_bytes(repr.as_mut());
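The hunk cuts off inside the loop; a minimal sketch of how rejection sampling of this sort typically completes (assuming `GroupIo` carries the `Group + GroupEncoding` bounds used above, and without claiming this matches the elided body exactly) would be:

```rust
// Hedged sketch only; the actual loop body past this hunk is not shown in the diff.
pub(crate) fn sample_point<C: GroupIo>(rng: &mut (impl RngCore + CryptoRng)) -> C::G {
  let mut repr = <C::G as GroupEncoding>::Repr::default();
  loop {
    // Draw candidate bytes and keep only those which decode to a valid group element.
    rng.fill_bytes(repr.as_mut());
    if let Some(point) = Option::<C::G>::from(<C::G as GroupEncoding>::from_bytes(&repr)) {
      return point;
    }
  }
}
```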

View File

@@ -23,7 +23,7 @@ rand_core = { version = "0.6", default-features = false }
zeroize = { version = "^1.5", default-features = false, features = ["zeroize_derive"] }
std-shims = { version = "0.1", path = "../../../common/std-shims", default-features = false }
std-shims = { version = "0.1", path = "../../../common/std-shims", default-features = false, features = ["alloc"] }
multiexp = { path = "../../multiexp", version = "0.4", default-features = false }
ciphersuite = { path = "../../ciphersuite", version = "^0.4.1", default-features = false }

View File

@@ -1,4 +1,4 @@
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
#![cfg_attr(docsrs, feature(doc_cfg))]
#![doc = include_str!("../README.md")]
#![cfg_attr(not(feature = "std"), no_std)]

View File

@@ -4,7 +4,7 @@ use zeroize::Zeroizing;
use rand_core::OsRng;
use dalek_ff_group::Ristretto;
use ciphersuite::{group::ff::Field, Ciphersuite};
use ciphersuite::WrappedGroup;
use dkg_recovery::recover_key;
use crate::*;
@@ -17,21 +17,21 @@ pub fn test_musig() {
let mut keys = vec![];
let mut pub_keys = vec![];
for _ in 0 .. PARTICIPANTS {
let key = Zeroizing::new(<Ristretto as Ciphersuite>::F::random(&mut OsRng));
pub_keys.push(<Ristretto as Ciphersuite>::generator() * *key);
let key = Zeroizing::new(<Ristretto as WrappedGroup>::F::random(&mut OsRng));
pub_keys.push(<Ristretto as WrappedGroup>::generator() * *key);
keys.push(key);
}
const CONTEXT: [u8; 32] = *b"MuSig Test ";
// Empty signing set
musig::<Ristretto>(CONTEXT, Zeroizing::new(<Ristretto as Ciphersuite>::F::ZERO), &[])
musig::<Ristretto>(CONTEXT, Zeroizing::new(<Ristretto as WrappedGroup>::F::ZERO), &[])
.unwrap_err();
// Signing set we're not part of
musig::<Ristretto>(
CONTEXT,
Zeroizing::new(<Ristretto as Ciphersuite>::F::ZERO),
&[<Ristretto as Ciphersuite>::generator()],
Zeroizing::new(<Ristretto as WrappedGroup>::F::ZERO),
&[<Ristretto as WrappedGroup>::generator()],
)
.unwrap_err();
@@ -48,7 +48,7 @@ pub fn test_musig() {
verification_shares.insert(
these_keys.params().i(),
<Ristretto as Ciphersuite>::generator() * **these_keys.original_secret_share(),
<Ristretto as WrappedGroup>::generator() * **these_keys.original_secret_share(),
);
assert_eq!(these_keys.group_key(), group_key);
@@ -63,7 +63,7 @@ pub fn test_musig() {
}
assert_eq!(
<Ristretto as Ciphersuite>::generator() *
<Ristretto as WrappedGroup>::generator() *
*recover_key(&created_keys.values().cloned().collect::<Vec<_>>()).unwrap(),
group_key
);

View File

@@ -1,4 +1,4 @@
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
#![cfg_attr(docsrs, feature(doc_cfg))]
#![doc = include_str!("../README.md")]
#![no_std]
@@ -8,7 +8,7 @@ use alloc::vec::Vec;
use zeroize::Zeroizing;
use ciphersuite::Ciphersuite;
use ciphersuite::{GroupIo, Id};
pub use dkg::*;
@@ -34,7 +34,7 @@ pub enum RecoveryError {
}
/// Recover a shared secret from a collection of `dkg::ThresholdKeys`.
pub fn recover_key<C: Ciphersuite>(
pub fn recover_key<C: GroupIo + Id>(
keys: &[ThresholdKeys<C>],
) -> Result<Zeroizing<C::F>, RecoveryError> {
let included = keys.iter().map(|keys| keys.params().i()).collect::<Vec<_>>();
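
For context, recovering the group secret from a qualifying set is the standard Lagrange interpolation at zero (assuming the usual constant-term Shamir sharing; the `Interpolation` options in the `dkg` diff below may generalize this). With included indices $S$ and secret shares $s_i$,

$$ s \;=\; \sum_{i \in S} \lambda_i s_i, \qquad \lambda_i \;=\; \prod_{\substack{j \in S \\ j \neq i}} \frac{j}{j - i}, $$

which the tests earlier in this commit check by comparing $s \cdot G$ against the group key.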

View File

@@ -1,4 +1,4 @@
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
#![cfg_attr(docsrs, feature(doc_cfg))]
#![doc = include_str!("../README.md")]
#![cfg_attr(not(feature = "std"), no_std)]
@@ -17,12 +17,11 @@ use ciphersuite::{
ff::{Field, PrimeField},
GroupEncoding,
},
Ciphersuite,
GroupIo, Id,
};
/// The ID of a participant, defined as a non-zero u16.
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Hash, Debug, Zeroize)]
#[cfg_attr(feature = "borsh", derive(borsh::BorshSerialize))]
pub struct Participant(u16);
impl Participant {
/// Create a new Participant identifier from a u16.
@@ -129,18 +128,8 @@ pub enum DkgError {
NotParticipating,
}
// Manually implements BorshDeserialize so we can enforce it's a valid index
#[cfg(feature = "borsh")]
impl borsh::BorshDeserialize for Participant {
fn deserialize_reader<R: io::Read>(reader: &mut R) -> io::Result<Self> {
Participant::new(u16::deserialize_reader(reader)?)
.ok_or_else(|| io::Error::other("invalid participant"))
}
}
/// Parameters for a multisig.
#[derive(Clone, Copy, PartialEq, Eq, Debug, Zeroize)]
#[cfg_attr(feature = "borsh", derive(borsh::BorshSerialize))]
pub struct ThresholdParams {
/// Participants needed to sign on behalf of the group.
t: u16,
@@ -210,16 +199,6 @@ impl ThresholdParams {
}
}
#[cfg(feature = "borsh")]
impl borsh::BorshDeserialize for ThresholdParams {
fn deserialize_reader<R: io::Read>(reader: &mut R) -> io::Result<Self> {
let t = u16::deserialize_reader(reader)?;
let n = u16::deserialize_reader(reader)?;
let i = Participant::deserialize_reader(reader)?;
ThresholdParams::new(t, n, i).map_err(|e| io::Error::other(format!("{e:?}")))
}
}
/// A method of interpolation.
#[derive(Clone, PartialEq, Eq, Debug, Zeroize)]
pub enum Interpolation<F: Zeroize + PrimeField> {
@@ -268,7 +247,7 @@ impl<F: Zeroize + PrimeField> Interpolation<F> {
/// heap-allocated pointer to minimize copies on the stack (`ThresholdKeys`, the publicly exposed
/// type).
#[derive(Clone, PartialEq, Eq)]
struct ThresholdCore<C: Ciphersuite> {
struct ThresholdCore<C: GroupIo + Id> {
params: ThresholdParams,
group_key: C::G,
verification_shares: HashMap<Participant, C::G>,
@@ -276,7 +255,7 @@ struct ThresholdCore<C: Ciphersuite> {
secret_share: Zeroizing<C::F>,
}
impl<C: Ciphersuite> fmt::Debug for ThresholdCore<C> {
impl<C: GroupIo + Id> fmt::Debug for ThresholdCore<C> {
fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result {
fmt
.debug_struct("ThresholdCore")
@@ -288,7 +267,7 @@ impl<C: Ciphersuite> fmt::Debug for ThresholdCore<C> {
}
}
impl<C: Ciphersuite> Zeroize for ThresholdCore<C> {
impl<C: GroupIo + Id> Zeroize for ThresholdCore<C> {
fn zeroize(&mut self) {
self.params.zeroize();
self.group_key.zeroize();
@@ -302,7 +281,7 @@ impl<C: Ciphersuite> Zeroize for ThresholdCore<C> {
/// Threshold keys usable for signing.
#[derive(Clone, Debug, Zeroize)]
pub struct ThresholdKeys<C: Ciphersuite> {
pub struct ThresholdKeys<C: GroupIo + Id> {
// Core keys.
#[zeroize(skip)]
core: Arc<Zeroizing<ThresholdCore<C>>>,
@@ -315,7 +294,7 @@ pub struct ThresholdKeys<C: Ciphersuite> {
/// View of keys, interpolated and with the expected linear combination taken for usage.
#[derive(Clone)]
pub struct ThresholdView<C: Ciphersuite> {
pub struct ThresholdView<C: GroupIo + Id> {
interpolation: Interpolation<C::F>,
scalar: C::F,
offset: C::F,
@@ -326,7 +305,7 @@ pub struct ThresholdView<C: Ciphersuite> {
verification_shares: HashMap<Participant, C::G>,
}
impl<C: Ciphersuite> fmt::Debug for ThresholdView<C> {
impl<C: GroupIo + Id> fmt::Debug for ThresholdView<C> {
fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result {
fmt
.debug_struct("ThresholdView")
@@ -341,7 +320,7 @@ impl<C: Ciphersuite> fmt::Debug for ThresholdView<C> {
}
}
impl<C: Ciphersuite> Zeroize for ThresholdView<C> {
impl<C: GroupIo + Id> Zeroize for ThresholdView<C> {
fn zeroize(&mut self) {
self.scalar.zeroize();
self.offset.zeroize();
@@ -357,7 +336,7 @@ impl<C: Ciphersuite> Zeroize for ThresholdView<C> {
}
}
impl<C: Ciphersuite> ThresholdKeys<C> {
impl<C: GroupIo + Id> ThresholdKeys<C> {
/// Create a new set of ThresholdKeys.
pub fn new(
params: ThresholdParams,
@@ -632,7 +611,7 @@ impl<C: Ciphersuite> ThresholdKeys<C> {
let mut verification_shares = HashMap::new();
for l in (1 ..= n).map(Participant) {
verification_shares.insert(l, <C as Ciphersuite>::read_G(reader)?);
verification_shares.insert(l, C::read_G(reader)?);
}
ThresholdKeys::new(
@@ -645,7 +624,7 @@ impl<C: Ciphersuite> ThresholdKeys<C> {
}
}
impl<C: Ciphersuite> ThresholdView<C> {
impl<C: GroupIo + Id> ThresholdView<C> {
/// Return the scalar applied to this view.
pub fn scalar(&self) -> C::F {
self.scalar

View File

@@ -19,8 +19,9 @@ workspace = true
[dependencies]
zeroize = { version = "1", default-features = false, features = ["zeroize_derive"] }
sha3 = { version = "0.11.0-rc.0", default-features = false }
sha3 = { version = "0.11.0-rc.2", default-features = false }
crypto-bigint = { version = "0.6", default-features = false, features = ["zeroize"] }
prime-field = { path = "../prime-field", default-features = false }
ciphersuite = { path = "../ciphersuite", default-features = false }

View File

@@ -1,4 +1,4 @@
use zeroize::Zeroize;
use prime_field::subtle::CtOption;
use sha3::{
digest::{
@@ -8,9 +8,9 @@ use sha3::{
Shake256,
};
use ciphersuite::{group::Group, Ciphersuite};
use ciphersuite::{group::GroupEncoding, Id, WithPreferredHash, GroupCanonicalEncoding};
use crate::{Scalar, Point};
use crate::Point;
/// Shake256, fixed to a 114-byte output, as used by Ed448.
#[derive(Clone, Default)]
@@ -49,21 +49,14 @@ impl FixedOutput for Shake256_114 {
}
impl HashMarker for Shake256_114 {}
#[derive(Clone, Copy, PartialEq, Eq, Debug, Zeroize)]
pub struct Ed448;
impl Ciphersuite for Ed448 {
type F = Scalar;
type G = Point;
impl Id for Point {
const ID: &[u8] = b"ed448";
}
impl WithPreferredHash for Point {
type H = Shake256_114;
const ID: &'static [u8] = b"ed448";
fn generator() -> Self::G {
Point::generator()
}
impl GroupCanonicalEncoding for Point {
fn from_canonical_bytes(bytes: &<Self::G as GroupEncoding>::Repr) -> CtOption<Self::G> {
Self::G::from_bytes(bytes)
}
}
#[test]
fn test_ed448() {
ff_group_tests::group::test_prime_group_bits::<_, Point>(&mut rand_core::OsRng);
}
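
Stepping back, the diffs in this commit replace uses of the monolithic `Ciphersuite` trait with several narrower traits. The following is a rough reconstruction of their shapes, inferred solely from the impls visible here; the real definitions live in the `ciphersuite` crate and may differ in bounds, supertraits, and no-std details.

```rust
// Inferred shapes only; bounds, supertraits, and signatures are assumptions.
use group::{Group, GroupEncoding, ff::PrimeField};
use subtle::CtOption;
use std::io;

/// A group, its scalar field, and a fixed generator.
pub trait WrappedGroup {
  type F: PrimeField;
  type G: Group<Scalar = Self::F> + GroupEncoding;
  fn generator() -> Self::G;
}

/// A human-readable identifier, as used for transcripts/domain separation.
pub trait Id: WrappedGroup {
  const ID: &[u8];
}

/// The hash function a ciphersuite prefers.
pub trait WithPreferredHash: WrappedGroup {
  type H;
}

/// Canonical decoding of group elements (rejecting non-canonical encodings).
pub trait GroupCanonicalEncoding: WrappedGroup {
  fn from_canonical_bytes(bytes: &<Self::G as GroupEncoding>::Repr) -> CtOption<Self::G>;
}

/// `io::Read`-based deserialization of field and group elements.
pub trait GroupIo: WrappedGroup {
  #[allow(non_snake_case)]
  fn read_F<R: io::Read>(reader: &mut R) -> io::Result<Self::F>;
  #[allow(non_snake_case)]
  fn read_G<R: io::Read>(reader: &mut R) -> io::Result<Self::G>;
}
```

`Ciphersuite` itself still appears as a bound in the FROST diffs below (`pub trait Curve: GroupIo + Ciphersuite`), presumably as a combination of these pieces.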

View File

@@ -1,4 +1,4 @@
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
#![cfg_attr(docsrs, feature(doc_cfg))]
#![doc = include_str!("../README.md")]
#![no_std]
@@ -29,7 +29,6 @@ mod point;
pub use point::Point;
mod ciphersuite;
pub use crate::ciphersuite::Ed448;
pub(crate) fn u8_from_bool(bit_ref: &mut bool) -> u8 {
use core::hint::black_box;

View File

@@ -7,8 +7,8 @@ use prime_field::{
subtle::{Choice, CtOption, ConstantTimeEq, ConditionallySelectable, ConditionallyNegatable},
zeroize::Zeroize,
rand_core::RngCore,
crypto_bigint::U512,
};
use crypto_bigint::U512;
use ciphersuite::group::{
ff::{Field, PrimeField, PrimeFieldBits},
@@ -18,17 +18,37 @@ use ciphersuite::group::{
use crate::{u8_from_bool, Scalar, FieldElement};
const G_Y: FieldElement = FieldElement::from(&U512::from_be_hex(concat!(
"0000000000000000",
"693f46716eb6bc248876203756c9c7624bea73736ca3984087789c1e",
"05a0c2d73ad3ff1ce67c39c4fdbd132c4ed7c8ad9808795bf230fa14",
)));
const G_Y: FieldElement = {
let bytes = U512::from_be_hex(concat!(
"0000000000000000",
"693f46716eb6bc248876203756c9c7624bea73736ca3984087789c1e",
"05a0c2d73ad3ff1ce67c39c4fdbd132c4ed7c8ad9808795bf230fa14",
))
.to_le_bytes();
let mut dest = [0; 57];
let mut i = 0;
while i < dest.len() {
dest[i] = bytes[i];
i += 1;
}
FieldElement::from_bytes(&dest).unwrap()
};
const G_X: FieldElement = FieldElement::from(&U512::from_be_hex(concat!(
"0000000000000000",
"4f1970c66bed0ded221d15a622bf36da9e146570470f1767ea6de324",
"a3d3a46412ae1af72ab66511433b80e18b00938e2626a82bc70cc05e",
)));
const G_X: FieldElement = {
let bytes = U512::from_be_hex(concat!(
"0000000000000000",
"4f1970c66bed0ded221d15a622bf36da9e146570470f1767ea6de324",
"a3d3a46412ae1af72ab66511433b80e18b00938e2626a82bc70cc05e",
))
.to_le_bytes();
let mut dest = [0; 57];
let mut i = 0;
while i < dest.len() {
dest[i] = bytes[i];
i += 1;
}
FieldElement::from_bytes(&dest).unwrap()
};
fn recover_x(y: FieldElement) -> CtOption<FieldElement> {
#[allow(non_snake_case)]

View File

@@ -14,9 +14,9 @@ all-features = true
rustdoc-args = ["--cfg", "docsrs"]
[dependencies]
hex-literal = { version = "0.4", default-features = false }
hex-literal = { version = "1", default-features = false }
std-shims = { version = "0.1", path = "../../common/std-shims", default-features = false, optional = true }
std-shims = { version = "0.1", path = "../../common/std-shims", default-features = false }
zeroize = { version = "^1.5", default-features = false, features = ["zeroize_derive"] }
@@ -25,12 +25,11 @@ typenum = { version = "1", default-features = false }
prime-field = { path = "../prime-field", default-features = false }
short-weierstrass = { path = "../short-weierstrass", default-features = false }
curve25519-dalek = { version = "4", default-features = false, features = ["legacy_compatibility"] }
dalek-ff-group = { path = "../dalek-ff-group", version = "0.4", default-features = false }
blake2 = { version = "0.11.0-rc.0", default-features = false }
blake2 = { version = "0.11.0-rc.2", default-features = false }
ciphersuite = { path = "../ciphersuite", version = "0.4", default-features = false }
generalized-bulletproofs-ec-gadgets = { git = "https://github.com/monero-oxide/monero-oxide", rev = "a6f8797007e768488568b821435cf5006517a962", default-features = false, optional = true }
generalized-bulletproofs-ec-gadgets = { git = "https://github.com/monero-oxide/monero-oxide", rev = "dc1b3dbe436aae61ec363505052d4715d38ce1df", default-features = false, optional = true }
[dev-dependencies]
hex = "0.4"
@@ -40,6 +39,6 @@ rand_core = { version = "0.6", features = ["std"] }
ff-group-tests = { path = "../ff-group-tests" }
[features]
alloc = ["std-shims", "zeroize/alloc", "prime-field/alloc", "short-weierstrass/alloc", "curve25519-dalek/alloc", "blake2/alloc", "ciphersuite/alloc", "generalized-bulletproofs-ec-gadgets"]
alloc = ["zeroize/alloc", "prime-field/alloc", "short-weierstrass/alloc", "curve25519-dalek/alloc", "blake2/alloc", "ciphersuite/alloc", "generalized-bulletproofs-ec-gadgets"]
std = ["alloc", "std-shims/std", "zeroize/std", "prime-field/std", "short-weierstrass/std", "ciphersuite/std", "generalized-bulletproofs-ec-gadgets/std"]
default = ["std"]

View File

@@ -1,21 +1,17 @@
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
#![cfg_attr(docsrs, feature(doc_cfg))]
#![doc = include_str!("../README.md")]
#![cfg_attr(not(feature = "std"), no_std)]
#[cfg(feature = "alloc")]
#[allow(unused_imports)]
use std_shims::prelude::*;
#[cfg(feature = "alloc")]
use std_shims::io::{self, Read};
use prime_field::{subtle::Choice, zeroize::Zeroize};
use ciphersuite::group::{
ff::{Field, PrimeField},
Group,
use prime_field::{
subtle::{Choice, CtOption},
zeroize::Zeroize,
};
use ciphersuite::group::{ff::PrimeField, Group, GroupEncoding};
use curve25519_dalek::Scalar as DalekScalar;
pub use dalek_ff_group::Scalar as FieldElement;
pub use curve25519_dalek::Scalar as FieldElement;
use short_weierstrass::{ShortWeierstrass, Affine, Projective};
@@ -32,18 +28,18 @@ pub struct Embedwards25519;
#[allow(deprecated)] // No other way to construct arbitrary `FieldElement` at compile-time :/
impl ShortWeierstrass for Embedwards25519 {
type FieldElement = FieldElement;
const A: FieldElement = FieldElement(DalekScalar::from_bits(hex_literal::hex!(
const A: FieldElement = FieldElement::from_bits(hex_literal::hex!(
"ead3f55c1a631258d69cf7a2def9de1400000000000000000000000000000010"
)));
const B: FieldElement = FieldElement(DalekScalar::from_bits(hex_literal::hex!(
));
const B: FieldElement = FieldElement::from_bits(hex_literal::hex!(
"5f07603a853f20370b682036210d463e64903a23ea669d07ca26cfc13f594209"
)));
));
const PRIME_ORDER: bool = true;
const GENERATOR: Affine<Self> = Affine::from_xy_unchecked(
FieldElement::ONE,
FieldElement(DalekScalar::from_bits(hex_literal::hex!(
FieldElement::from_bits(hex_literal::hex!(
"2e4118080a484a3dfbafe2199a0e36b7193581d676c0dadfa376b0265616020c"
))),
)),
);
type Scalar = Scalar;
@@ -80,30 +76,23 @@ impl ShortWeierstrass for Embedwards25519 {
pub type Point = Projective<Embedwards25519>;
impl ciphersuite::Ciphersuite for Embedwards25519 {
impl ciphersuite::WrappedGroup for Embedwards25519 {
type F = Scalar;
type G = Point;
type H = blake2::Blake2b512;
const ID: &'static [u8] = b"embedwards25519";
fn generator() -> Self::G {
Point::generator()
<Point as Group>::generator()
}
// We override the provided impl, which compares against the reserialization, because
// we already require canonicity
#[cfg(feature = "alloc")]
#[allow(non_snake_case)]
fn read_G<R: Read>(reader: &mut R) -> io::Result<Self::G> {
use ciphersuite::group::GroupEncoding;
let mut encoding = <Self::G as GroupEncoding>::Repr::default();
reader.read_exact(encoding.as_mut())?;
let point = Option::<Self::G>::from(Self::G::from_bytes(&encoding))
.ok_or_else(|| io::Error::other("invalid point"))?;
Ok(point)
}
impl ciphersuite::Id for Embedwards25519 {
const ID: &[u8] = b"embedwards25519";
}
impl ciphersuite::WithPreferredHash for Embedwards25519 {
type H = blake2::Blake2b512;
}
impl ciphersuite::GroupCanonicalEncoding for Embedwards25519 {
fn from_canonical_bytes(bytes: &<Self::G as GroupEncoding>::Repr) -> CtOption<Self::G> {
Self::G::from_bytes(bytes)
}
}
@@ -119,9 +108,8 @@ fn test_curve() {
#[test]
fn generator() {
use ciphersuite::group::{Group, GroupEncoding};
assert_eq!(
Point::generator(),
<Point as Group>::generator(),
Point::from_bytes(&hex_literal::hex!(
"0100000000000000000000000000000000000000000000000000000000000000"
))
@@ -139,6 +127,5 @@ fn zero_x_is_off_curve() {
// Checks random won't infinitely loop
#[test]
fn random() {
use ciphersuite::group::Group;
Point::random(&mut rand_core::OsRng);
}

View File

@@ -1,4 +1,4 @@
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
#![cfg_attr(docsrs, feature(doc_cfg))]
#![doc = include_str!("../README.md")]
/// Tests for the Field trait.

View File

@@ -1,6 +1,6 @@
[package]
name = "modular-frost"
version = "0.10.1"
version = "0.11.0"
description = "Modular implementation of FROST over ff/group"
license = "MIT"
repository = "https://github.com/serai-dex/serai/tree/develop/crypto/frost"
@@ -17,33 +17,35 @@ rustdoc-args = ["--cfg", "docsrs"]
workspace = true
[dependencies]
thiserror = { version = "2", default-features = false, features = ["std"] }
std-shims = { version = "0.1", path = "../../common/std-shims", default-features = false, features = ["alloc"] }
rand_core = { version = "0.6", default-features = false, features = ["std"] }
rand_chacha = { version = "0.3", default-features = false, features = ["std"] }
thiserror = { version = "2", default-features = false }
zeroize = { version = "^1.5", default-features = false, features = ["std", "zeroize_derive"] }
subtle = { version = "^2.4", default-features = false, features = ["std"] }
rand_core = { version = "0.6", default-features = false, features = ["alloc"] }
rand_chacha = { version = "0.3", default-features = false }
hex = { version = "0.4", default-features = false, features = ["std"], optional = true }
zeroize = { version = "^1.5", default-features = false, features = ["alloc", "zeroize_derive"] }
subtle = { version = "^2.4", default-features = false }
transcript = { package = "flexible-transcript", path = "../transcript", version = "^0.3.2", default-features = false, features = ["std", "recommended"] }
hex = { version = "0.4", default-features = false, features = ["alloc"], optional = true }
dalek-ff-group = { path = "../dalek-ff-group", version = "0.4", default-features = false, features = ["std"], optional = true }
minimal-ed448 = { path = "../ed448", version = "0.4", default-features = false, features = ["std"], optional = true }
transcript = { package = "flexible-transcript", path = "../transcript", version = "^0.3.2", default-features = false, features = ["recommended"] }
ciphersuite = { path = "../ciphersuite", version = "^0.4.1", default-features = false, features = ["std"] }
dalek-ff-group = { path = "../dalek-ff-group", version = "0.5", default-features = false, features = ["alloc"], optional = true }
minimal-ed448 = { path = "../ed448", version = "0.4", default-features = false, features = ["alloc"], optional = true }
ciphersuite = { path = "../ciphersuite", version = "^0.4.1", default-features = false, features = ["alloc"] }
sha2 = { version = "0.10.0", default-features = false, optional = true }
elliptic-curve = { version = "0.13", default-features = false, features = ["hash2curve"], optional = true }
ciphersuite-kp256 = { path = "../ciphersuite/kp256", version = "0.4", default-features = false, features = ["std"], optional = true }
ciphersuite-kp256 = { path = "../ciphersuite/kp256", version = "0.4", default-features = false, features = ["alloc"], optional = true }
multiexp = { path = "../multiexp", version = "0.4", default-features = false, features = ["std", "batch"] }
multiexp = { path = "../multiexp", version = "0.4", default-features = false, features = ["alloc", "batch"] }
schnorr = { package = "schnorr-signatures", path = "../schnorr", version = "^0.5.1", default-features = false, features = ["std"] }
schnorr = { package = "schnorr-signatures", path = "../schnorr", version = "^0.5.1", default-features = false, features = ["alloc"] }
dkg = { path = "../dkg", version = "0.6.1", default-features = false, features = ["std"] }
dkg-recovery = { path = "../dkg/recovery", version = "0.6", default-features = false, features = ["std"], optional = true }
dkg-dealer = { path = "../dkg/dealer", version = "0.6", default-features = false, features = ["std"], optional = true }
dkg = { path = "../dkg", version = "0.6.1", default-features = false }
dkg-recovery = { path = "../dkg/recovery", version = "0.6", default-features = false, optional = true }
dkg-dealer = { path = "../dkg/dealer", version = "0.6", default-features = false, optional = true }
[dev-dependencies]
hex = "0.4"
@@ -54,6 +56,38 @@ dkg-recovery = { path = "../dkg/recovery", default-features = false, features =
dkg-dealer = { path = "../dkg/dealer", default-features = false, features = ["std"] }
[features]
std = [
"std-shims/std",
"thiserror/std",
"rand_core/std",
"rand_chacha/std",
"zeroize/std",
"subtle/std",
"hex?/std",
"transcript/std",
"dalek-ff-group?/std",
"minimal-ed448?/std",
"ciphersuite/std",
"sha2?/std",
"elliptic-curve?/std",
"ciphersuite-kp256?/std",
"multiexp/std",
"schnorr/std",
"dkg/std",
"dkg-recovery?/std",
"dkg-dealer?/std",
]
ed25519 = ["dalek-ff-group"]
ristretto = ["dalek-ff-group"]
@@ -63,3 +97,5 @@ p256 = ["sha2", "elliptic-curve", "ciphersuite-kp256"]
ed448 = ["minimal-ed448"]
tests = ["hex", "rand_core/getrandom", "dkg-dealer", "dkg-recovery"]
default = ["std"]

View File

@@ -1,5 +1,7 @@
use core::{marker::PhantomData, fmt::Debug};
use std::io::{self, Read, Write};
#[allow(unused_imports)]
use std_shims::prelude::*;
use std_shims::io::{self, Read, Write};
use zeroize::Zeroizing;
use rand_core::{RngCore, CryptoRng};
@@ -26,8 +28,10 @@ impl<A: Send + Sync + Clone + PartialEq + Debug + WriteAddendum> Addendum for A
/// Algorithm trait usable by the FROST signing machine to produce signatures.
pub trait Algorithm<C: Curve>: Send + Sync {
/// The transcript format this algorithm uses. This likely should NOT be the IETF-compatible
/// transcript included in this crate.
/// The transcript format this algorithm uses.
///
/// This MUST NOT be the IETF-compatible transcript included in this crate UNLESS this is an
/// IETF-specified ciphersuite.
type Transcript: Sync + Clone + Debug + Transcript;
/// Serializable addendum, used in algorithms requiring more data than just the nonces.
type Addendum: Addendum;
@@ -67,8 +71,10 @@ pub trait Algorithm<C: Curve>: Send + Sync {
) -> Result<(), FrostError>;
/// Sign a share with the given secret/nonce.
///
/// The secret will already have had its Lagrange coefficient applied, so it is the necessary
/// key share.
///
/// The nonce will already have been processed into the combined form d + (e * p).
fn sign_share(
&mut self,
@@ -83,6 +89,7 @@ pub trait Algorithm<C: Curve>: Send + Sync {
fn verify(&self, group_key: C::G, nonces: &[Vec<C::G>], sum: C::F) -> Option<Self::Signature>;
/// Verify a specific share given as a response.
///
/// This function should return a series of pairs whose products should sum to zero for a valid
/// share. Any error raised is treated as the share being invalid.
#[allow(clippy::type_complexity, clippy::result_unit_err)]
@@ -97,8 +104,10 @@ pub trait Algorithm<C: Curve>: Send + Sync {
mod sealed {
pub use super::*;
/// IETF-compliant transcript. This is incredibly naive and should not be used within larger
/// protocols.
/// IETF-compliant transcript.
///
/// This is incredibly naive and MUST NOT be used within larger protocols. No guarantees are made
/// about its safety EXCEPT as used with the IETF-specified FROST ciphersuites.
#[derive(Clone, Debug)]
pub struct IetfTranscript(pub(crate) Vec<u8>);
impl Transcript for IetfTranscript {
@@ -129,6 +138,7 @@ pub(crate) use sealed::IetfTranscript;
/// HRAm usable by the included Schnorr signature algorithm to generate challenges.
pub trait Hram<C: Curve>: Send + Sync + Clone {
/// HRAm function to generate a challenge.
///
/// H2 from the IETF draft, despite having a different argument set (not being pre-formatted).
#[allow(non_snake_case)]
fn hram(R: &C::G, A: &C::G, m: &[u8]) -> C::F;
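
The `d + (e * p)` form referenced above is the usual FROST nonce combination (standard notation, not taken from this diff): each signer commits to a nonce pair $(D_i, E_i) = (d_i G, e_i G)$, a binding factor $\rho_i$ is derived from the transcript, and the effective nonce is

$$ R_i = D_i + \rho_i E_i, \qquad R = \sum_i R_i, $$

so the scalar each signer actually signs with is $d_i + e_i \rho_i$, matching the combined form the documentation mentions.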

View File

@@ -1,24 +1,24 @@
use ciphersuite::{digest::Digest, FromUniformBytes, Ciphersuite};
use subtle::CtOption;
use zeroize::Zeroize;
use ciphersuite::{
digest::Digest, group::GroupEncoding, FromUniformBytes, WrappedGroup, WithPreferredHash,
};
use dalek_ff_group::Scalar;
use crate::{curve::Curve, algorithm::Hram};
macro_rules! dalek_curve {
(
$feature: literal,
$Curve: ident,
$Hram: ident,
$CONTEXT: literal,
$chal: literal
) => {
pub use dalek_ff_group::$Curve;
impl Curve for $Curve {
const CONTEXT: &'static [u8] = $CONTEXT;
fn hash_to_F(dst: &[u8], msg: &[u8]) -> Self::F {
let mut digest = <Self as Ciphersuite>::H::new();
let mut digest = <Self as WithPreferredHash>::H::new();
digest.update(Self::CONTEXT);
digest.update(dst);
digest.update(msg);
@@ -31,8 +31,12 @@ macro_rules! dalek_curve {
pub struct $Hram;
impl Hram<$Curve> for $Hram {
#[allow(non_snake_case)]
fn hram(R: &<$Curve as Ciphersuite>::G, A: &<$Curve as Ciphersuite>::G, m: &[u8]) -> Scalar {
let mut hash = <$Curve as Ciphersuite>::H::new();
fn hram(
R: &<$Curve as WrappedGroup>::G,
A: &<$Curve as WrappedGroup>::G,
m: &[u8],
) -> Scalar {
let mut hash = <$Curve as WithPreferredHash>::H::new();
if $chal.len() != 0 {
hash.update($CONTEXT);
hash.update($chal);
@@ -46,8 +50,31 @@ macro_rules! dalek_curve {
};
}
#[cfg(feature = "ristretto")]
dalek_curve!("ristretto", Ristretto, IetfRistrettoHram, b"FROST-RISTRETTO255-SHA512-v1", b"chal");
/*
FROST defined Ristretto as using SHA2-512, while Blake2b512 is considered "preferred" by
`dalek-ff-group`. We define our own ciphersuite for it accordingly.
*/
#[derive(Clone, Copy, PartialEq, Eq, Debug, Zeroize)]
pub struct Ristretto;
impl WrappedGroup for Ristretto {
type F = Scalar;
type G = dalek_ff_group::RistrettoPoint;
fn generator() -> Self::G {
dalek_ff_group::Ristretto::generator()
}
}
impl ciphersuite::Id for Ristretto {
const ID: &[u8] = b"FROST-RISTRETTO255";
}
impl WithPreferredHash for Ristretto {
type H = <Ed25519 as WithPreferredHash>::H;
}
impl ciphersuite::GroupCanonicalEncoding for Ristretto {
fn from_canonical_bytes(bytes: &<Self::G as GroupEncoding>::Repr) -> CtOption<Self::G> {
dalek_ff_group::Ristretto::from_canonical_bytes(bytes)
}
}
dalek_curve!(Ristretto, IetfRistrettoHram, b"FROST-RISTRETTO255-SHA512-v1", b"chal");
#[cfg(feature = "ed25519")]
dalek_curve!("ed25519", Ed25519, IetfEd25519Hram, b"FROST-ED25519-SHA512-v1", b"");
pub use dalek_ff_group::Ed25519;
dalek_curve!(Ed25519, IetfEd25519Hram, b"FROST-ED25519-SHA512-v1", b"");

View File

@@ -1,6 +1,6 @@
pub use ciphersuite::{digest::Digest, group::GroupEncoding, FromUniformBytes, Ciphersuite};
use minimal_ed448::{Scalar, Point};
pub use minimal_ed448::Ed448;
pub use ciphersuite::{digest::Digest, group::GroupEncoding, FromUniformBytes, WithPreferredHash};
use minimal_ed448::Scalar;
pub use minimal_ed448::Point as Ed448;
use crate::{curve::Curve, algorithm::Hram};
@@ -9,7 +9,7 @@ const CONTEXT: &[u8] = b"FROST-ED448-SHAKE256-v1";
impl Curve for Ed448 {
const CONTEXT: &'static [u8] = CONTEXT;
fn hash_to_F(dst: &[u8], msg: &[u8]) -> Self::F {
let mut digest = <Self as Ciphersuite>::H::new();
let mut digest = <Self as WithPreferredHash>::H::new();
digest.update(Self::CONTEXT);
digest.update(dst);
digest.update(msg);
@@ -22,8 +22,8 @@ impl Curve for Ed448 {
pub(crate) struct Ietf8032Ed448Hram;
impl Ietf8032Ed448Hram {
#[allow(non_snake_case)]
pub(crate) fn hram(context: &[u8], R: &Point, A: &Point, m: &[u8]) -> Scalar {
let mut digest = <Ed448 as Ciphersuite>::H::new();
pub(crate) fn hram(context: &[u8], R: &Ed448, A: &Ed448, m: &[u8]) -> Scalar {
let mut digest = <Ed448 as WithPreferredHash>::H::new();
digest.update(b"SigEd448");
digest.update([0, u8::try_from(context.len()).unwrap()]);
digest.update(context);
@@ -39,7 +39,7 @@ impl Ietf8032Ed448Hram {
pub struct IetfEd448Hram;
impl Hram<Ed448> for IetfEd448Hram {
#[allow(non_snake_case)]
fn hram(R: &Point, A: &Point, m: &[u8]) -> Scalar {
fn hram(R: &Ed448, A: &Ed448, m: &[u8]) -> Scalar {
Ietf8032Ed448Hram::hram(&[], R, A, m)
}
}
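For context on the hunk above: the `b"SigEd448"` prefix followed by `[0, context.len()]` is RFC 8032's `dom4` domain separator, `"SigEd448" || octet(phflag) || octet(len(context)) || context`, with `phflag = 0` because this is plain Ed448 rather than Ed448ph; R, A, and the message are hashed after it. A hedged, purely illustrative sketch of assembling that prefix (the helper name is not from this codebase):

// Build RFC 8032's dom4 prefix for plain Ed448 (phflag = 0). This mirrors the
// digest.update calls in Ietf8032Ed448Hram::hram above, for illustration only.
fn dom4(context: &[u8]) -> Vec<u8> {
  assert!(context.len() <= 255, "RFC 8032 limits the context to 255 bytes");
  let mut prefix = Vec::with_capacity(10 + context.len());
  prefix.extend_from_slice(b"SigEd448");
  prefix.push(0); // phflag: 0 for Ed448, 1 would be Ed448ph
  prefix.push(u8::try_from(context.len()).unwrap());
  prefix.extend_from_slice(context);
  prefix
}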

View File

@@ -7,7 +7,7 @@ use ciphersuite::{
ff::{Field, PrimeField},
GroupEncoding,
},
Ciphersuite,
WrappedGroup,
};
use elliptic_curve::{
@@ -20,7 +20,7 @@ use elliptic_curve::{
use crate::{curve::Curve, algorithm::Hram};
#[allow(non_snake_case)]
fn hash_to_F<C: Ciphersuite<F: PrimeField<Repr = GenericArray<u8, U32>>>>(
fn hash_to_F<C: WrappedGroup<F: PrimeField<Repr = GenericArray<u8, U32>>>>(
dst: &[u8],
msg: &[u8],
) -> C::F {
@@ -112,10 +112,10 @@ macro_rules! kp_curve {
impl Hram<$Curve> for $Hram {
#[allow(non_snake_case)]
fn hram(
R: &<$Curve as Ciphersuite>::G,
A: &<$Curve as Ciphersuite>::G,
R: &<$Curve as WrappedGroup>::G,
A: &<$Curve as WrappedGroup>::G,
m: &[u8],
) -> <$Curve as Ciphersuite>::F {
) -> <$Curve as WrappedGroup>::F {
<$Curve as Curve>::hash_to_F(
b"chal",
&[R.to_bytes().as_ref(), A.to_bytes().as_ref(), m].concat(),
@@ -132,7 +132,7 @@ kp_curve!("p256", P256, IetfP256Hram, b"FROST-P256-SHA256-v1");
kp_curve!("secp256k1", Secp256k1, IetfSecp256k1Hram, b"FROST-secp256k1-SHA256-v1");
#[cfg(test)]
fn test_oversize_dst<C: Ciphersuite<F: PrimeField<Repr = GenericArray<u8, U32>>>>() {
fn test_oversize_dst<C: WrappedGroup<F: PrimeField<Repr = GenericArray<u8, U32>>>>() {
use sha2::Digest;
// The draft specifies DSTs >255 bytes should be hashed into a 32-byte DST
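That comment refers to the hash-to-curve rule (RFC 9380) for oversized domain-separation tags: when a DST exceeds 255 bytes, expand_message_xmd replaces it with H("H2C-OVERSIZE-DST-" || DST), a fixed 32 bytes when H is SHA-256. A hedged, standalone sketch of that reduction (illustrative only; the crate's own hash_to_F is what actually applies it):

use sha2::{Digest, Sha256};

// Reduce an oversized DST per expand_message_xmd: DSTs over 255 bytes are
// replaced by SHA-256("H2C-OVERSIZE-DST-" || DST), i.e. a 32-byte tag.
fn reduce_dst(dst: &[u8]) -> Vec<u8> {
  if dst.len() <= 255 {
    return dst.to_vec();
  }
  let mut hasher = Sha256::new();
  hasher.update(b"H2C-OVERSIZE-DST-");
  hasher.update(dst);
  hasher.finalize().to_vec()
}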

View File

@@ -1,26 +1,23 @@
use core::{ops::Deref, convert::AsRef};
use std::io::{self, Read};
#[allow(unused_imports)]
use std_shims::prelude::*;
use std_shims::io::{self, Read};
use rand_core::{RngCore, CryptoRng};
use zeroize::{Zeroize, Zeroizing};
use subtle::ConstantTimeEq;
pub use ciphersuite::{
digest::Digest,
group::{
ff::{Field, PrimeField},
Group,
},
Ciphersuite,
use ciphersuite::group::{
ff::{Field, PrimeField},
Group,
};
pub use ciphersuite::{digest::Digest, WrappedGroup, GroupIo, Ciphersuite};
#[cfg(any(feature = "ristretto", feature = "ed25519"))]
mod dalek;
#[cfg(feature = "ristretto")]
pub use dalek::{Ristretto, IetfRistrettoHram};
#[cfg(feature = "ed25519")]
pub use dalek::{Ed25519, IetfEd25519Hram};
#[cfg(any(feature = "ristretto", feature = "ed25519"))]
pub use dalek::*;
#[cfg(any(feature = "secp256k1", feature = "p256"))]
mod kp256;
@@ -38,11 +35,11 @@ pub(crate) use ed448::Ietf8032Ed448Hram;
/// FROST Ciphersuite.
///
/// This exclude the signing algorithm specific H2, making this solely the curve, its associated
/// This excludes the signing algorithm specific H2, making this solely the curve, its associated
/// hash function, and the functions derived from it.
pub trait Curve: Ciphersuite {
pub trait Curve: GroupIo + Ciphersuite {
/// Context string for this curve.
const CONTEXT: &'static [u8];
const CONTEXT: &[u8];
/// Hash the given dst and data to a byte vector. Used to instantiate H4 and H5.
fn hash(dst: &[u8], data: &[u8]) -> impl AsRef<[u8]> {
@@ -121,7 +118,7 @@ pub trait Curve: Ciphersuite {
/// Read a point from a reader, rejecting identity.
#[allow(non_snake_case)]
fn read_G<R: Read>(reader: &mut R) -> io::Result<Self::G> {
let res = <Self as Ciphersuite>::read_G(reader)?;
let res = <Self as GroupIo>::read_G(reader)?;
if res.is_identity().into() {
Err(io::Error::other("identity point"))?;
}
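As a usage-level illustration of this trait: everything `Curve` derives is domain-separated by `CONTEXT` together with a per-use dst, so hashing the same message under two dsts yields unrelated field elements, and `read_G` additionally rejects the identity where the underlying `GroupIo` read would accept it. A minimal sketch, assuming this crate is consumed as `modular_frost` with the "ristretto" feature enabled (those names are assumptions, not shown in this diff):

use modular_frost::curve::{Curve, Ristretto};

// Same message, different dsts: the CONTEXT || dst prefix keeps the derived
// hashes in disjoint domains.
fn distinct_domains(msg: &[u8]) {
  let a = Ristretto::hash_to_F(b"dst-one", msg);
  let b = Ristretto::hash_to_F(b"dst-two", msg);
  assert!(a != b);
}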

View File

@@ -1,8 +1,11 @@
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
#![cfg_attr(docsrs, feature(doc_cfg))]
#![doc = include_str!("../README.md")]
#![cfg_attr(not(feature = "std"), no_std)]
use core::fmt::Debug;
use std::collections::HashMap;
#[allow(unused_imports)]
use std_shims::prelude::*;
use std_shims::collections::HashMap;
use thiserror::Error;
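The pattern in this hunk, gating `no_std` on the absence of the `std` feature and pulling the prelude and collections from `std_shims`, is what lets the crate compile without `std`. A hedged sketch of the same pattern in a minimal crate root (the layout is illustrative, not taken from this diff):

#![cfg_attr(not(feature = "std"), no_std)]

// `std_shims` resolves to the std types when the `std` feature is enabled and to
// its own no_std-compatible replacements otherwise, so this code reads the same
// either way.
#[allow(unused_imports)]
use std_shims::prelude::*;
use std_shims::collections::HashMap;

fn tally(ids: &[u8]) -> HashMap<u8, usize> {
  let mut counts = HashMap::new();
  for id in ids {
    *counts.entry(*id).or_insert(0) += 1;
  }
  counts
}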

View File

@@ -6,7 +6,9 @@
// Each nonce remains of the form (d, e) and made into a proper nonce with d + (e * b)
use core::ops::Deref;
use std::{
#[allow(unused_imports)]
use std_shims::prelude::*;
use std_shims::{
io::{self, Read, Write},
collections::HashMap,
};

View File

@@ -1,5 +1,7 @@
use core::{ops::Deref, fmt::Debug};
use std::{
#[allow(unused_imports)]
use std_shims::prelude::*;
use std_shims::{
io::{self, Read, Write},
collections::HashMap,
};
@@ -11,10 +13,9 @@ use zeroize::{Zeroize, Zeroizing};
use transcript::Transcript;
use ciphersuite::group::{
ff::{Field, PrimeField},
GroupEncoding,
};
use ciphersuite::group::{ff::PrimeField, GroupEncoding};
#[cfg(any(test, feature = "tests"))]
use ciphersuite::group::ff::Field;
use multiexp::BatchVerifier;
use crate::{
@@ -101,6 +102,7 @@ pub trait PreprocessMachine: Send {
type SignMachine: SignMachine<Self::Signature, Preprocess = Self::Preprocess>;
/// Perform the preprocessing round required in order to sign.
///
/// Returns a preprocess message to be broadcast to all participants, over an authenticated
/// channel.
fn preprocess<R: RngCore + CryptoRng>(self, rng: &mut R)
@@ -234,6 +236,8 @@ pub trait SignMachine<S>: Send + Sync + Sized {
/// Takes in the participants' preprocess messages. Returns the signature share to be broadcast
/// to all participants, over an authenticated channel. The parties who participate here will
/// become the signing set for this session.
///
/// The caller MUST only use preprocesses obtained via this machine's `read_preprocess` function.
fn sign(
self,
commitments: HashMap<Participant, Self::Preprocess>,
@@ -420,7 +424,10 @@ pub trait SignatureMachine<S>: Send + Sync {
fn read_share<R: Read>(&self, reader: &mut R) -> io::Result<Self::SignatureShare>;
/// Complete signing.
///
/// Takes in everyone else's shares. Returns the signature.
///
/// The caller MUST only use shares obtained via this machine's `read_share` function.
fn complete(self, shares: HashMap<Participant, Self::SignatureShare>) -> Result<S, FrostError>;
}
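To make the added requirement concrete (shares passed to `complete` MUST come from this machine's own `read_share`), here is a hedged sketch of the final round built from the two signatures visible in this hunk. The module paths, the root re-exports of `Participant` and `FrostError`, and the use of std io are assumptions for illustration, not a verbatim copy of the crate's API:

use std::{io, collections::HashMap};
// Paths below are assumed for the sake of a self-contained example.
use modular_frost::{Participant, FrostError, sign::SignatureMachine};

// Deserialize each peer's share through this machine's own read_share, then feed
// the results to complete; shares parsed any other way must not be passed in.
fn finish<S, M: SignatureMachine<S>>(
  machine: M,
  serialized: &HashMap<Participant, Vec<u8>>,
) -> io::Result<Result<S, FrostError>> {
  let mut shares = HashMap::new();
  for (participant, bytes) in serialized {
    shares.insert(*participant, machine.read_share(&mut bytes.as_slice())?);
  }
  Ok(machine.complete(shares))
}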

View File

@@ -1,6 +1,6 @@
use rand_core::OsRng;
use ciphersuite::Ciphersuite;
use ciphersuite::GroupIo;
use schnorr::SchnorrSignature;

Some files were not shown because too many files have changed in this diff