179 Commits

Author SHA1 Message Date
Luke Parker
c24768f922 Fix borks from the latest nightly
The `cargo doc` build started to fail when `doc_auto_cfg` was rolled into
`doc_cfg`, so we no longer build docs for deps (as we can't reasonably update
`generic-array` at this time).

`home` has been patched since we're able to, though not as a direct requirement
of this PR.
2025-11-04 13:10:11 -05:00
Luke Parker
5818f1a41c Update nightly version 2025-11-04 10:05:08 -05:00
Luke Parker
1b781b4b57 Fix CI 2025-10-07 04:39:32 -04:00
Luke Parker
63f7e220c0 Update macOS labels in CI due to deprecation of macos-13 2025-10-05 10:59:40 -04:00
Luke Parker
7d49366373 Move develop to patch-polkadot-sdk (#678)
* Update `build-dependencies` CI action

* Update `develop` to `patch-polkadot-sdk`

Allows us to finally remove the old `serai-dex/substrate` repository _and_
should have CI pass without issue on `develop` again.

The changes made here should be trivial and maintain all prior
behavior/functionality. The most notable are to `chain_spec.rs`, in order to
still use a SCALE-encoded `GenesisConfig` (avoiding `serde_json`).

* CI fixes

* Add `/usr/local/opt/llvm/lib` to paths on macOS hosts

* Attempt to use `LD_LIBRARY_PATH` in macOS GitHub CI

* Use `libp2p 0.56` in `serai-node`

* Correct Windows build dependencies

* Correct `llvm/lib` path on macOS

* Correct how macOS 13 and 14 have different homebrew paths

* Use `sw_vers` instead of `uname` on macOS

Yields the macOS version instead of the kernel's version.

* Replace hard-coded path with the intended env variable to fix macOS 13

* Add `libclang-dev` as dependency to the Debian Dockerfile

* Set the `CODE` storage slot

* Update to a version of substrate without `wasmtimer`

Turns out `wasmtimer` is WASM-only. This should restore the node's functioning
in non-WASM environments.

* Restore `clang` as a dependency in the Debian Dockerfile, as we require a C++ compiler

* Move from Debian bookworm to trixie

* Restore `chain_getBlockBin` to the RPC

* Always generate a new key for the P2P network

* Mention every account on-chain before they publish a transaction

`CheckNonce` requires accounts to have a provider in order for their nonce to
even be considered. This shims that by claiming every account which signs a
transaction has a provider at the start of the block.

The actual execution could presumably diverge between block building (which
sets the provider before each transaction) and execution (which sets the
providers at the start of the block). It doesn't diverge in our current
configuration and it won't be propagated to `next` (which doesn't use
`CheckNonce`).

Also uses explicit indexes for the `serai_abi::{Call, Event}` `enum`s.

* Adopt `patch-polkadot-sdk` with fixed peering

* Manually insert the authority discovery key into the keystore

I did try pulling in `pallet-authority-discovery` for this, updating
`SessionKeys`, but that was insufficient for whatever reason.

* Update to latest `substrate-wasm-builder`

* Fix timeline for incrementing providers

e1671dd71b incremented the providers for every
single transaction's sender before execution, noting the solution was fragile
but it worked for us at this time. It did not work for us at this time.

The new solution replaces `inc_providers` with direct access to the `Account`
`StorageMap` to increment the providers, achieving the desired goal, _without_
emitting an event (which is ordered, and the disparate order between building
and execution was causing mismatches of the state root).

This solution is also fragile and may also be insufficient. None of this code
exists anymore on `next`, however; it just has to work sufficiently for now. A
sketch of the approach follows this entry.

* clippy
2025-10-05 10:58:08 -04:00
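A minimal sketch of the provider shim described above, assuming `frame_system`'s standard `Account` storage map; this is illustrative, not the actual runtime code:

// Hedged sketch: bump the sender's provider reference count by writing the
// `Account` storage item directly. No `NewAccount` event is emitted; events are
// ordered, and an ordering mismatch between block building and execution would
// change the state root.
fn note_provider<T: frame_system::Config>(who: &T::AccountId) {
  frame_system::Account::<T>::mutate(who, |account| {
    account.providers = account.providers.saturating_add(1);
  });
}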
Luke Parker
55ed33d2d1 Update to a version of Substrate which no longer cites our fork of substrate-bip39 2025-09-30 19:30:40 -04:00
Luke Parker
0066b94d38 Update substrate-wasm-builder from serai/polkadot-sdk to serai/patch-polkadot-sdk
Steps towards allowing us to delete the `serai/polkadot-sdk` repository.
2025-09-01 21:07:11 -04:00
Luke Parker
7d54c02ec6 Update to latest nightly
Replaces #671 due to a lint being triggered.
2025-09-01 16:48:34 -04:00
Mohan
568324f631 fix(spec): svm version mismatch in docs; document foundryup (#665)
* fix(spec): Change svm version in docs to 0.8.26

* fix(spec): add instructions for using foundryup
2025-09-01 15:58:59 -04:00
Luke Parker
eaa9a0e5a6 Pin actions in the pages workflow 2025-09-01 15:55:17 -04:00
Luke Parker
251996c1b0 Use solc 0.8.26
`next` already does, and it's annoying to have to consistently switch between
the two branches.
2025-09-01 15:55:06 -04:00
Luke Parker
98b9cc82a7 Fix Some(_) which should be Ok(_) 2025-09-01 15:42:47 -04:00
Luke Parker
f8adfb56ad Remove unwrap within debug assertion 2025-08-26 23:15:58 -04:00
Luke Parker
7a790f3a20 ff/alloc when ciphersuite/alloc 2025-08-23 11:00:05 -04:00
Luke Parker
a7c77f8b5f repr(transparent) on dalek_ff_group::FieldElement 2025-08-23 05:17:43 -04:00
Luke Parker
da3095ed15 Remove FieldElement::from_square
The new `FieldElement::from_u256` is sufficient to load an unreduced value. The
caller can perform the square themselves, without us explicitly supporting this
special case.

Updates the monero-oxide version used to one which no longer uses
`FieldElement::from_square` (as their use is why it was added).
2025-08-22 18:42:43 -04:00
Luke Parker
758d422595 Have <ed448::Point as Zeroize>::zeroize yield a well-defined value 2025-08-20 08:14:00 -04:00
Luke Parker
9841061b49 Add missing feature in substrate/client 2025-08-20 06:38:25 -04:00
Luke Parker
4122a0135f Fix dirty Cargo.lock 2025-08-20 05:20:47 -04:00
Luke Parker
b63ef32864 Smash Ciphersuite definitions into their own crates
Uses dalek-ff-group for Ed25519 and Ristretto. Uses minimal-ed448 for Ed448.
Adds ciphersuite-kp256 for Secp256k1 and P-256.
2025-08-20 05:12:36 -04:00
Luke Parker
8be03a8fc2 Fix dirty lockfile 2025-08-20 01:15:56 -04:00
Luke Parker
677a2e5749 Fix zeroization timeline in multiexp, cargo machete 2025-08-20 00:35:56 -04:00
Luke Parker
38bda1d586 dalek_ff_group::FieldElement: FromUniformBytes<64> 2025-08-20 00:23:39 -04:00
Luke Parker
2bc2ca6906 Implement FromUniformBytes<64> for dalek_ff_group::Scalar 2025-08-20 00:06:07 -04:00
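A hedged usage sketch of the new implementation, via the `ff` crate's `FromUniformBytes` trait:

use dalek_ff_group::Scalar;
use ff::FromUniformBytes;

// Derive a uniformly distributed scalar from 64 bytes of hash output, without
// the bias a 32-byte reduction would introduce.
fn scalar_from_wide_hash(wide: [u8; 64]) -> Scalar {
  Scalar::from_uniform_bytes(&wide)
}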
Luke Parker
900a6612d7 Use std-shims to reduce flexible-transcript MSRV to 1.66
flexible-transcript already had a shim to support <1.66. This was irrelevant
since flexible-transcript had an MSRV of 1.73. Due to how clunky that shim was,
it has been removed, despite theoretically enabling an even lower MSRV.
2025-08-19 23:43:26 -04:00
Luke Parker
17c1d5cd6b Tweak multiexp to Zeroize points when invoked in constant time, not just scalars 2025-08-19 22:28:59 -04:00
Luke Parker
8a1b56a928 Make the transcript dependency optional for schnorr-signatures
It's only required when aggregating.
2025-08-19 21:50:58 -04:00
Luke Parker
75964cf6da Place Schnorr signature aggregation behind a feature flag 2025-08-19 21:45:59 -04:00
Luke Parker
d407e35cee Fix Ciphersuite feature flagging 2025-08-19 21:42:25 -04:00
Luke Parker
c8ef044acb Version bump std-shims 2025-08-19 21:01:14 -04:00
Luke Parker
ddbc32de4d Update ciphersuite/dkg MSRVs 2025-08-19 18:20:19 -04:00
Luke Parker
e5ccfac19e Replace bespoke LazyLock/OnceLock with spin re-exports
Presumably notably slower on platforms with std, yet only when compiling with
old versions of Rust, for which the alternative is no support at all.
2025-08-19 18:10:33 -04:00
Luke Parker
432daae1d1 Polyfill extension traits for div_ceil and io::Error::other 2025-08-19 18:04:29 -04:00
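A hedged usage sketch, assuming the polyfills are exposed via `std_shims::prelude` (as in the diff further below); on toolchains with the inherent methods, those take precedence:

use std_shims::prelude::*;

// Number of chunks needed to hold `len` bytes, rounding up. On Rust < 1.73 this
// resolves to the polyfilled extension trait; on newer toolchains, to std's
// inherent `div_ceil`.
fn chunks(len: u64, chunk_size: u64) -> u64 {
  len.div_ceil(chunk_size)
}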
Luke Parker
da3a85efe5 Only drop OnceLock value if initialized 2025-08-19 17:50:04 -04:00
Luke Parker
1e0240123d Shim LazyLock when before 1.70 2025-08-19 17:40:19 -04:00
Luke Parker
f6d4d1b084 Remove unused import, fix dirty Cargo.lock 2025-08-19 16:24:19 -04:00
Luke Parker
1b37dd2951 Shim std::sync::LazyLock for Rust < 1.80
Allows downgrading some crypto crates' MSRV to 1.79 as well.
2025-08-19 16:15:44 -04:00
Luke Parker
f32e0609f1 Add warning to dalek-ff-group 2025-08-19 15:25:40 -04:00
Luke Parker
ca85f9ba0c Remove the poorly-designed reduce_512 API
Unused and unpublished. This was only added in the FCMP++ branch as a quick fix
for performance reasons. Finding a better API is still a tricky question, but
this API is _bad_.
2025-08-19 15:24:49 -04:00
Luke Parker
cfd1cb3a37 Add FieldElement::wide_reduce to dalek-ff-group 2025-08-19 13:48:54 -04:00
Luke Parker
f2c13a0040 Expose Once within std-shims, bump spin to 0.10
This is technically a semver break due to bumping spin to 0.10, with the types
from spin being directly exposed. Long-term, we should not directly expose spin
but instead have our own types which are thin wrappers around spin (clearly
defining our API and allowing upgrading internals without breaking semver).
2025-08-19 13:36:01 -04:00
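A minimal sketch of the suggested long-term design, a thin wrapper rather than a direct re-export (names assumed):

// Newtype over `spin::Once` so `spin` itself stays out of the public API,
// letting the internals change without a semver break.
pub struct Once<T>(spin::Once<T>);

impl<T> Once<T> {
  pub const fn new() -> Self {
    Self(spin::Once::new())
  }

  // Run `f` exactly once across all callers, returning the initialized value.
  pub fn call_once<F: FnOnce() -> T>(&self, f: F) -> &T {
    self.0.call_once(f)
  }

  pub fn get(&self) -> Option<&T> {
    self.0.get()
  }
}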
Luke Parker
961f46bc04 Add const fn to create a dalek-ff-group FieldElement 2025-08-19 13:17:39 -04:00
Luke Parker
2c4de3bab4 Bump version of ff-group-tests 2025-08-19 12:51:16 -04:00
Luke Parker
95c30720d2 Update how x coordinates are handled in bitcoin-serai 2025-08-18 14:52:29 -04:00
Luke Parker
ceede14f5c Fix misc compilation errors 2025-08-18 14:52:29 -04:00
Luke Parker
5e60ea9718 Don't offset nonces, yet negate them, to achieve an even Y coordinate
Replaces an iterative loop with an immediate result, if action is necessary.
2025-08-18 14:52:29 -04:00
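A hedged sketch of the idea using `k256` types, not the actual bitcoin-serai code: negate once based on the parity, instead of offsetting in a loop:

use k256::{Scalar, ProjectivePoint, elliptic_curve::point::AffineCoordinates};

// If the nonce commitment R has an odd Y coordinate, negating both the nonce
// and R immediately yields an even Y, with no search required.
fn make_even(mut nonce: Scalar, mut r: ProjectivePoint) -> (Scalar, ProjectivePoint) {
  if bool::from(r.to_affine().y_is_odd()) {
    nonce = -nonce;
    r = -r;
  }
  (nonce, r)
}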
Luke Parker
153f6f2f2f Update to a monero-oxide patched to dkg 0.6 2025-08-18 14:52:29 -04:00
Luke Parker
104c0d4492 Rename ThresholdKeys::secret_share to ThresholdKeys::original_secret_share 2025-08-18 14:52:29 -04:00
Luke Parker
7c8f13ab28 Raise flexible-transcript requirement as required 2025-08-18 14:52:29 -04:00
Luke Parker
cb0deadf9a Version bump flexible-transcript 2025-08-18 14:52:29 -04:00
Luke Parker
cb489f9cef Other version bumps 2025-08-18 14:52:29 -04:00
Luke Parker
cc662cb591 Version bumps, add necessary version specifications 2025-08-18 14:52:29 -04:00
Luke Parker
a8b8844e3f Fix MSRV for simple-request 2025-08-18 14:52:29 -04:00
Luke Parker
82b543ef75 Fix clippy lint for ed448 on optional compilation path 2025-08-18 14:52:29 -04:00
Luke Parker
72e80c1a3d Update everything which uses dkg to the new APIs 2025-08-18 14:52:29 -04:00
Luke Parker
b6edc94bcd Add dealer key generation crate 2025-08-18 14:52:29 -04:00
Luke Parker
cfce2b26e2 Update READMEs, targeting an 80-character line limit 2025-08-18 14:52:29 -04:00
Luke Parker
e87bbcda64 Have modular-frost compile again 2025-08-18 14:52:29 -04:00
Luke Parker
9f84adf8b3 Smash dkg into dkg, dkg-[recovery, promote, musig, pedpop]
promote and pedpop require dleq, which doesn't support no-std. All three should
be moved outside the Serai repository, per #597, as none are planned for use
nor worth covering under our BBP.
2025-08-18 14:52:29 -04:00
Luke Parker
3919cf55ae Extend modular-frost to test with scaled and offset keys
The transcript included the group key _plus_ the offset, when it should've
included only the group key, as the declared group key already had the offset
applied. This has been fixed.
2025-08-18 14:52:29 -04:00
Luke Parker
38dd8cb191 Support taking arbitrary linear combinations of signing keys, not just additive offsets 2025-08-18 14:52:29 -04:00
Luke Parker
f2563d39cb Correct crypto MSRVs 2025-08-18 14:52:29 -04:00
Luke Parker
15a9cbef40 git checkout -f next ./crypto
Proceeds to remove the eVRF DKG afterwards, only keeping what's relevant to
this branch alone.
2025-08-18 14:52:29 -04:00
Luke Parker
078d6e51e5 Re-install python3 after removal to solve unmet dependencies 2025-08-15 16:17:31 -04:00
Luke Parker
6c33e18745 Explicitly install python3 to fix build-dependencies 2025-08-15 16:14:10 -04:00
Luke Parker
b743c9a43e Update Rust version
This causes the Serai node to compile and run again.
2025-08-15 15:26:16 -04:00
Luke Parker
0c2f2979a9 Remove monero-serai, migrating to monero-oxide 2025-08-15 11:45:20 -04:00
Luke Parker
971951a1a6 Add overflow-checks even on release, per good practice 2025-08-15 10:56:28 -04:00
Luke Parker
92d9e908cb Version bumps for packages that needed to be published for monero-oxide 2025-08-15 10:56:10 -04:00
Luke Parker
a32b97be88 Move to wasm32v1-none from wasm32-unknown-unknown
Works towards fixing how the Substrate node Docker image no longer works.
2025-08-15 10:55:05 -04:00
Luke Parker
e3809b2ff1 Remove unnecessary edits to Docker config in an attempt to fix the CI 2025-08-12 01:27:28 -04:00
Luke Parker
fd2d8b4f0a Use Rust 1.89 when installing bins via cargo, version pin svm-rs
svm-rs just released a new version requiring 1.89 to compile. The process now
doesn't install _any_ software with 1.85, minimizing how many toolchains we
have in use.
2025-08-12 01:27:28 -04:00
Luke Parker
bc81614894 Attempt Docker 24 again 2025-08-12 01:27:28 -04:00
Luke Parker
8df5aa2e2d Forward docker stderr to stdout in case stderr is being dropped for some reason 2025-08-12 01:27:28 -04:00
Luke Parker
b000740470 Docker 25 since 24 doesn't have an active tag anymore 2025-08-12 01:27:28 -04:00
Luke Parker
b9f554111d Attempt to use Docker 24
A long shot premised on an old forum post describing how downgrading to Docker
24 solved their instance of the error we face, though our conditions are
presumably different.
2025-08-12 01:27:28 -04:00
Luke Parker
354c408e3e Stop using an older version of Docker 2025-08-12 01:27:28 -04:00
Luke Parker
df3b60376a Restore Debian 12 Bookworm over Debian 11 Bullseye 2025-08-12 01:27:28 -04:00
Luke Parker
8d209c652e Add missing "-4" arguments to wget 2025-08-12 01:27:28 -04:00
Luke Parker
9ddad794b4 Use wget -4 for the same reason as the prior commit 2025-08-12 01:27:28 -04:00
Luke Parker
b934e484cc Replace busybox wget with wget on alpine to attempt to resolve DNS issues
See https://github.com/alpinelinux/docker-alpine/issues/155.
2025-08-12 01:27:28 -04:00
Luke Parker
f8aee9b3c8 Add overflow-checks = true recommendation to monero-serai 2025-08-12 01:27:28 -04:00
Luke Parker
f51d77d26a Fix tweaked Substrate connection code in serai-client tests 2025-08-12 01:27:28 -04:00
Luke Parker
0780deb643 Use three separate commands within the Bitcoin Dockerfile to download the release
Attempts to debug which command is failing, as right now, the command as a whole is failing within the CI.
2025-08-12 01:27:28 -04:00
Luke Parker
75c38560f4 Bookworm -> Bullseye, except for the runtime 2025-08-12 01:27:28 -04:00
Luke Parker
9f1c5268a5 Attempt downgrading Docker from 27 to 26 2025-08-12 01:27:28 -04:00
Luke Parker
35b113768b Attempt downgrading docker from .28 to .27 2025-08-12 01:27:28 -04:00
Luke Parker
f2595c4939 Tweak how substrate-client tests wait to connect to the Monero node 2025-08-12 01:27:28 -04:00
Luke Parker
8fcfa6d3d5 Add dedicated error for when amounts aren't representable within a u64
Fixes the issue where _inputs_ could still overflow u64::MAX and cause a panic.
2025-08-12 01:27:28 -04:00
Luke Parker
54c9d19726 Have docker install set host 2025-08-12 01:27:28 -04:00
Luke Parker
25324c3cd5 Add uidmap dependency for rootless Docker 2025-08-12 01:27:28 -04:00
Luke Parker
ecb7df85b0 if: runner.os == 'Linux', with single quotes 2025-08-12 01:27:28 -04:00
Luke Parker
68c7acdbef Attempt using rootless Docker in CI via the setup-docker-action
Restores using ubuntu-latest.

Basically, at some point in the last year the existing Docker e2e tests started
failing. I'm unclear if this is an issue with the OS, the docker packages, or
what. This just tries to find a solution.
2025-08-12 01:27:28 -04:00
Luke Parker
8b60feed92 Normalize FROM AS casing in Dockerfiles 2025-08-12 01:27:28 -04:00
Luke Parker
5c895efcd0 Downgrade tests requiring Docker from Ubuntu latest to Ubuntu 22.04
Attempts to resolve containers immediately exiting for some specific test runs.
2025-08-12 01:27:28 -04:00
Luke Parker
60e55656aa deny --hide-inclusion-graph 2025-08-12 01:27:28 -04:00
Luke Parker
9536282418 Update which deb archive to use within the runtime Dockerfile 2025-08-12 01:27:28 -04:00
Luke Parker
8297d0679d Update substrate to one with a properly defined panic handler as of modern Rust 2025-08-12 01:27:28 -04:00
Luke Parker
d9f854b08a Attempt to fix install of clang within runtime Dockerfile 2025-08-12 01:27:28 -04:00
Luke Parker
8aaf7f7dc6 Remove (presumably) unnecessary command to explicitly install python 2025-08-12 01:27:28 -04:00
Luke Parker
ce447558ac Update Rust versions used in orchestration 2025-08-12 01:27:28 -04:00
Luke Parker
fc850da30e Missing --allow-remove-essential flag 2025-08-12 01:27:28 -04:00
Luke Parker
d6f6cf1965 Attempt to force remove shim-signed to resolve 'unmet dependencies' issues with shim-signed 2025-08-12 01:27:28 -04:00
Luke Parker
4438b51881 Expand python packages explicitly installed 2025-08-12 01:27:28 -04:00
Luke Parker
6ae0d9fad7 Install cargo deny with Rust 1.85 and pin its version 2025-08-12 01:27:28 -04:00
Luke Parker
ad08b410a8 Pin cargo-machete to 0.8.0 to prevent other unexpected CI failures 2025-08-12 01:27:28 -04:00
Luke Parker
ec3cfd3ab7 Explicitly install python3 after removing various unnecessary packages 2025-08-12 01:27:28 -04:00
Luke Parker
01eb2daa0b Updated dated version of actions/cache 2025-08-12 01:27:28 -04:00
Luke Parker
885000f970 Add update, upgrade, fix-missing call to Ubuntu build dependencies
Attempts to fix a CI failure caused by some misconfiguration...
2025-08-12 01:27:28 -04:00
Luke Parker
4be506414b Install cargo machete with Rust 1.85
cargo machete now uses Rust's 2024 edition, and 1.85 was the first to ship it.
2025-08-12 01:27:28 -04:00
Luke Parker
1143d84e1d Remove msbuild from packages to remove when the CI starts
Apparently, it's no longer installed by default.
2025-08-12 01:27:28 -04:00
Luke Parker
336922101f Further harden decoy selection
It risked panicking if a non-monotonic distribution was returned. While the
provided RPC code won't return non-monotonic distributions, users are allowed
to define their own implementations and override the provided method. Said
implementations could omit this required check.
2025-08-12 01:27:28 -04:00
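A minimal sketch of the check now enforced, with the error type and exact placement assumed:

// Reject non-monotonic distributions, which user-provided RPC implementations
// could otherwise return and cause a panic downstream in decoy selection.
fn check_distribution(distribution: &[u64]) -> Result<(), &'static str> {
  if distribution.windows(2).any(|pair| pair[0] > pair[1]) {
    return Err("RPC returned a non-monotonic output distribution");
  }
  Ok(())
}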
Luke Parker
ffa033d978 Clarify transcripting for Clsag::verify, Mlsag::verify, as with Clsag::sign 2025-08-12 01:27:28 -04:00
Luke Parker
23f986f57a Tweak the Substrate runtime as required by the Rust version bump performed 2025-08-12 01:27:28 -04:00
Luke Parker
bb726b58af Fix #654 2025-08-12 01:27:28 -04:00
Luke Parker
387615705c Fix #643 2025-08-12 01:27:28 -04:00
Luke Parker
c7f825a192 Rename Bulletproof::calculate_bp_clawback to Bulletproof::calculate_clawback 2025-08-12 01:27:28 -04:00
Luke Parker
d363b1c173 Fix #630 2025-08-12 01:27:28 -04:00
Luke Parker
d5077ae966 Respond to 13.1.1.
Uses Zeroizing for username/password in monero-simple-request-rpc.
2025-08-12 01:27:28 -04:00
Luke Parker
188fcc3cb4 Replace potentially-failing unchecked arithmetic operations with ones which error
In response to 9.13.3.

Requires a bump to Rust 1.82 to take advantage of `Option::is_none_or`.
2025-08-12 01:27:28 -04:00
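A hedged sketch of the `Option::is_none_or` pattern enabled by the bump, with illustrative names:

// A value satisfies an optional bound if the bound is absent or met.
// `Option::is_none_or` was stabilized in Rust 1.82.
fn within_bound(value: u64, bound: Option<u64>) -> bool {
  bound.is_none_or(|bound| value <= bound)
}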
Luke Parker
cbab9486c6 Clarify messages in non-debug assertions 2025-08-12 01:27:28 -04:00
Luke Parker
a5f4c450c6 Response to usage of unwrap in non-test code
This commit replaces all usage of `unwrap` with `expect` within
`networks/monero`, clarifying why the panic risked is unreachable. This commit
also replaces some uses of `unwrap` with solutions which are guaranteed not to
fail.

Notably, compilation on 128-bit systems is prevented, ensuring
`u64::try_from(usize::MAX)` will never panic at runtime.

Slight breaking changes are additionally included as necessary to massage out
some avoidable panics.
2025-08-12 01:27:28 -04:00
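A hedged sketch of the guard described, preventing compilation on targets whose `usize` exceeds 64 bits:

// With this in place, `u64::try_from` of a `usize` can't fail at runtime, so
// the surrounding `expect` calls are provably unreachable.
#[cfg(not(any(
  target_pointer_width = "16",
  target_pointer_width = "32",
  target_pointer_width = "64",
)))]
compile_error!("this crate only supports targets with a pointer width of at most 64 bits");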
Luke Parker
4f65a0b147 Remove Clone from ClsagMultisigMask{Sender, Receiver}
This had ill-defined properties on Clone, as a mask could be sent multiple times
(unintended) and multiple algorithms may receive the same mask from a singular
sender.

Requires removing the Clone bound within modular-frost and expanding the test
helpers accordingly.

This was not raised in the audit, yet was found upon independent review.
2025-08-12 01:27:28 -04:00
Luke Parker
feb18d64a7 Respond to 2 3
We now use `FrostError::InternalError` instead of a panic to represent the mask
not being set.
2025-08-12 01:27:28 -04:00
Luke Parker
cb1e6535cb Respond to 2 2 2025-08-12 01:27:28 -04:00
Luke Parker
6b8cf6653a Respond to 1.1 A2 (also cited as 2 1)
`read_vec` was unbounded. It now accepts an optional bound. In some places, we
are able to define and provide a bound (Bulletproofs(+)' `L` and `R` vectors).
In others, we cannot (the amount of inputs within a transaction, which is not
subject to any rule in the current consensus other than the total transaction
size limit). Usage of `None` in those locations preserves the existing
behavior.
2025-08-12 01:27:28 -04:00
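A hedged sketch of the bounded `read_vec` behavior; the exact signature, and the fixed-width length encoding used here for simplicity, are assumptions:

use std::io::{self, Read};

// Read a length-prefixed vector, rejecting lengths above an optional bound.
// `None` preserves the prior, unbounded behavior. The vector is grown
// incrementally rather than preallocated, so a hostile length can't force a
// huge allocation up front.
fn read_vec<R: Read, T>(
  reader: &mut R,
  bound: Option<usize>,
  mut read_elem: impl FnMut(&mut R) -> io::Result<T>,
) -> io::Result<Vec<T>> {
  let mut len_buf = [0; 8];
  reader.read_exact(&mut len_buf)?;
  let len = usize::try_from(u64::from_le_bytes(len_buf))
    .map_err(|_| io::Error::other("length exceeded usize"))?;
  if bound.is_some_and(|bound| len > bound) {
    return Err(io::Error::other("vector length exceeded bound"));
  }
  let mut res = Vec::new();
  for _ in 0 .. len {
    res.push(read_elem(reader)?);
  }
  Ok(res)
}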
Luke Parker
b426bfcfe8 Respond to 1.1 A1 2025-08-12 01:27:28 -04:00
Luke Parker
21ce50ecf7 Revert "Forward docker stderr to stdout in case stderr is being dropped for some reason"
This was intended for the monero-audit branch.
2025-08-10 20:53:09 -04:00
Luke Parker
a4ceb2e756 Forward docker stderr to stdout in case stderr is being dropped for some reason 2025-08-10 20:50:12 -04:00
Luke Parker
eab5d9e64f Remove Mastodon link from README
Closes #662.
2025-07-12 03:29:21 -04:00
Luke Parker
e9c1235b76 Tweak how features are activated in the coins pallet tests 2024-10-30 17:15:39 -04:00
akildemir
dc1b8dfccd add coins pallet tests (#606)
* add tests

* remove unused crate

* remove serai_abi
2024-10-30 16:05:56 -04:00
Luke Parker
d0201cf2e5 Remove potentially vartime (due to cache side-channel attacks) table access in dalek-ff-group and minimal-ed448 2024-10-27 08:51:19 -04:00
Luke Parker
f3d20e60b3 Remove --no-deps from docs build to fix linking to deps 2024-10-17 21:14:13 -04:00
Luke Parker
dafba81b40 Add wasm32-unknown-unknown target to docs build 2024-10-17 18:45:34 -04:00
Luke Parker
91f8ec53d9 Add build-dependencies into docs build 2024-10-17 18:29:47 -04:00
Luke Parker
fc9a4a08b8 Correct rust-docs component name 2024-10-17 18:12:35 -04:00
Luke Parker
45fadb21ac Correct paths in pages.yml 2024-10-17 18:05:54 -04:00
Luke Parker
28619fbee1 CI fixes
Mainly corrects for https://github.com/alloy-rs/alloy/issues/1510, yet also
adds a missing machete ignore.
2024-10-17 18:02:57 -04:00
Luke Parker
bbe014c3a7 Have CI build with doc_auto_cfg 2024-10-17 17:48:14 -04:00
Luke Parker
fb3fadb3d3 Publish Rust docs to GH pages 2024-10-17 17:18:58 -04:00
Luke Parker
f481d20773 Correct licensing for .github 2024-10-17 17:17:36 -04:00
Luke Parker
599b2dec8f cargo update
Should fix the recent CI failures re: Ethereum as well.
2024-10-09 00:39:34 -04:00
akildemir
435f1d9ae1 add specific network/coin/balance types (#619)
* add specific network/coin/balance types

* misc fixes

* fix clippy

* misc fixes

* fix pr comments

* Make halting for external networks

* fix encode/decode
2024-10-06 22:16:11 -04:00
Luke Parker
d7ecab605e Update docs gems 2024-09-25 10:37:29 -04:00
Jeffro
805fea52ec Add link for SCALE encoding in doc 2024-09-24 14:17:28 -07:00
j-berman
48db06f901 xmr: fix scan long encrypted amount 2024-09-21 08:33:35 -07:00
Luke Parker
e9d0a5e0ed Remove stray references to monero-wallet-util 2024-09-20 04:28:23 -04:00
Luke Parker
44d05518aa Add a public TransactionKeys struct to monero-wallet
monero-wallet ships an Eventuality, yet it's across the entire transaction. It
can't prove a single output's state with a traditional payment proof. By adding
this new object, another library can obtain the ephemeral randomness used and
do any/every proof they want regarding a transaction's outputs.

Necessary for https://github.com/serai-dex/serai/issues/599.
2024-09-20 04:26:21 -04:00
Luke Parker
23b433fe6c Fix #612 2024-09-20 04:05:17 -04:00
Luke Parker
2e57168a97 Update documentation on Timelocked 2024-09-20 04:01:55 -04:00
Luke Parker
5c6160c398 Kick monero-seed, polyseed, monero-wallet-util to https://github.com/kayabaNerve/monero-wallet-util 2024-09-20 03:24:33 -04:00
Luke Parker
9eee1d971e bitcoin-serai changes from next
Expands the NotEnoughFunds error and enables fetching the entire unsigned
transaction, not just the outputs it'll have.
2024-09-20 03:14:20 -04:00
Luke Parker
e6300847d6 monero-serai changes from 2edc2f3612 2024-09-20 02:42:46 -04:00
Luke Parker
e0a3e7bea6 Change dummy payment ID behavior on 2-output, no change
This reduces who can fingerprint the transaction from any observer of the
blockchain to just one of the two recipients.
2024-09-20 02:40:18 -04:00
Luke Parker
cbebaa1349 Tighten documentation on Block::number 2024-09-20 02:40:01 -04:00
Luke Parker
669b2fef72 Remove test_tweak_keys
What it tests no longer applies since tweak_keys now introduces an unspendable
script path.
2024-09-19 21:43:00 -04:00
Luke Parker
3af430d8de Use the IETF transacript in bitcoin-serai, not RecommendedTranscript
This is more likely to be interoperable in the long term.
2024-09-19 21:13:08 -04:00
Luke Parker
dfb5a053ae Resolve #611 2024-09-19 20:58:33 -04:00
Luke Parker
bdcc061bb4 Add ScannableBlock abstraction in the RPC
Makes scanning synchronous, only erroring upon a malicious node or an
unplanned-for hard fork.
2024-09-13 04:38:49 -04:00
Luke Parker
2c7148d636 Add machete exception for monero-clsag to monero-wallet 2024-09-13 02:39:43 -04:00
Luke Parker
6b270bc6aa Remove async-trait from monero-rpc 2024-09-13 02:36:53 -04:00
Luke Parker
875c669a7a Remove monero-serai multisig for just monero-[clsag, wallet] multisig 2024-09-12 18:41:35 -04:00
Luke Parker
0d399ecb28 Remove unused error in monero-address 2024-09-12 18:41:35 -04:00
Luke Parker
88440807e1 Monero v0.18.3.4 (#605)
* Monero v0.18.3.4

* Correct `check_weight_and_fee` call

* Restore empty test files so CI isn't borked
2024-09-06 01:43:31 -04:00
Luke Parker
c1a9256cc5 dockertest 0.5, correct errors from prior update commit 2024-09-05 23:31:45 -04:00
Luke Parker
0d5756ffcf cargo update, upgrade alloy
Removes a dated proc-macro-crate patch.
2024-09-05 17:03:23 -04:00
Luke Parker
ac7b98daac Remove tokio dependency from tendermint-machine
Indirects it via a minimal wrapper which can be trivially patched.
2024-09-05 16:30:27 -04:00
Luke Parker
efc7d70ab1 Clarify when wallet2 will decrypt payment IDs with citations 2024-09-05 15:50:36 -04:00
Luke Parker
4e834873d3 Lints from latest nightly
We can't adopt it due to some issue with building the runtime, but these are
good to have.
2024-09-01 16:33:44 -04:00
akildemir
a506d74d69 move economic security into its own pallet (#596)
* move economic security into its own pallet

* fix deny

* Update Cargo.toml, .github for the new crates

* Remove unused import

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2024-08-31 18:55:42 -04:00
Boog900
394db44b30 Monero: fix signature hash for V1 txs (#598)
* fix signature hash for V1 txs

* fix CI
2024-08-23 20:34:54 -04:00
akildemir
a2df54dd6a merge genesis complete block with genesis ended 2024-08-15 08:15:40 -07:00
akildemir
efc45c391b update emissions pallet author email 2024-08-15 08:12:47 -07:00
akildemir
cccc1fc7e6 Implement block emissions (#551)
* add genesis liquidity implementation

* add missing deposit event

* fix CI issues

* minor fixes

* make math safer

* fix fmt

* implement block emissions

* make remove liquidity an authorized call

* implement setting initial values for coins

* add genesis liquidity test & misc fixes

* update to develop latest

* fix rotation test

* fix licensing

* add fast-epoch feature

* only create the pool when adding liquidity first time

* add initial reward era test

* test whole pre-economic-security emissions

* fix clippy

* add swap-to-staked-sri feature

* rebase changes

* fix tests

* Remove accidentally commited ETH ABI files

* fix some pr comments

* Finish up fixing pr comments

* exclude SRI from is_allowed check

* Misc changes

---------

Co-authored-by: akildemir <aeg_asd@hotmail.com>
Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2024-08-14 23:12:04 -04:00
akildemir
bf1c493d9a add missing prevotes (#590)
* add missing prevotes

* remove the TODO

* add missing current step checks

---------

Co-authored-by: akildemir <aeg_asd@hotmail.com>
2024-08-14 15:00:48 -04:00
Luke Parker
3de1e4dee2 Remove stray file in docs/ 2024-08-05 06:52:15 -04:00
Luke Parker
2591b5ade9 Update Gemfile.lock to silence a rexml disclosure 2024-08-03 02:57:56 -04:00
akildemir
e6620963c7 update author email 2024-08-02 01:50:33 -07:00
546 changed files with 13144 additions and 77069 deletions


@@ -12,7 +12,7 @@ runs:
steps:
- name: Bitcoin Daemon Cache
id: cache-bitcoind
uses: actions/cache@13aacd865c20de90d75de3b17ebe84f7a17d57d2
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809
with:
path: bitcoin.tar.gz
key: bitcoind-${{ runner.os }}-${{ runner.arch }}-${{ inputs.version }}


@@ -7,13 +7,20 @@ runs:
- name: Remove unused packages
shell: bash
run: |
sudo apt remove -y "*msbuild*" "*powershell*" "*nuget*" "*bazel*" "*ansible*" "*terraform*" "*heroku*" "*aws*" azure-cli
# Ensure the repositories are synced
sudo apt update -y
# Actually perform the removals
sudo apt remove -y "*powershell*" "*nuget*" "*bazel*" "*ansible*" "*terraform*" "*heroku*" "*aws*" azure-cli
sudo apt remove -y "*nodejs*" "*npm*" "*yarn*" "*java*" "*kotlin*" "*golang*" "*swift*" "*julia*" "*fortran*" "*android*"
sudo apt remove -y "*apache2*" "*nginx*" "*firefox*" "*chromium*" "*chrome*" "*edge*"
sudo apt remove -y --allow-remove-essential -f shim-signed *python3*
# This removal command requires the prior removals due to unmet dependencies otherwise
sudo apt remove -y "*qemu*" "*sql*" "*texinfo*" "*imagemagick*"
sudo apt autoremove -y
sudo apt clean
docker system prune -a --volumes
# Reinstall python3 as a general dependency of a functional operating system
sudo apt install -y python3 --fix-missing
if: runner.os == 'Linux'
- name: Remove unused packages
@@ -31,19 +38,48 @@ runs:
shell: bash
run: |
if [ "$RUNNER_OS" == "Linux" ]; then
sudo apt install -y ca-certificates protobuf-compiler
sudo apt install -y ca-certificates protobuf-compiler libclang-dev
elif [ "$RUNNER_OS" == "Windows" ]; then
choco install protoc
elif [ "$RUNNER_OS" == "macOS" ]; then
brew install protobuf
brew install protobuf llvm
HOMEBREW_ROOT_PATH=/opt/homebrew # Apple Silicon
if [ $(uname -m) = "x86_64" ]; then HOMEBREW_ROOT_PATH=/usr/local; fi # Intel
ls $HOMEBREW_ROOT_PATH/opt/llvm/lib | grep "libclang.dylib" # Make sure this installed `libclang`
echo "DYLD_LIBRARY_PATH=$HOMEBREW_ROOT_PATH/opt/llvm/lib:$DYLD_LIBRARY_PATH" >> "$GITHUB_ENV"
fi
- name: Install solc
shell: bash
run: |
cargo install svm-rs
svm install 0.8.25
svm use 0.8.25
cargo +1.89 install svm-rs --version =0.5.18
svm install 0.8.26
svm use 0.8.26
- name: Remove preinstalled Docker
shell: bash
run: |
docker system prune -a --volumes
sudo apt remove -y *docker*
# Install uidmap which will be required for the explicitly installed Docker
sudo apt install uidmap
if: runner.os == 'Linux'
- name: Update system dependencies
shell: bash
run: |
sudo apt update -y
sudo apt upgrade -y
sudo apt autoremove -y
sudo apt clean
if: runner.os == 'Linux'
- name: Install rootless Docker
uses: docker/setup-docker-action@b60f85385d03ac8acfca6d9996982511d8620a19
with:
rootless: true
set-host: true
if: runner.os == 'Linux'
# - name: Cache Rust
# uses: Swatinem/rust-cache@a95ba195448af2da9b00fb742d14ffaaf3c21f43


@@ -5,14 +5,14 @@ inputs:
version:
description: "Version to download and run"
required: false
default: v0.18.3.1
default: v0.18.3.4
runs:
using: "composite"
steps:
- name: Monero Wallet RPC Cache
id: cache-monero-wallet-rpc
uses: actions/cache@13aacd865c20de90d75de3b17ebe84f7a17d57d2
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809
with:
path: monero-wallet-rpc
key: monero-wallet-rpc-${{ runner.os }}-${{ runner.arch }}-${{ inputs.version }}


@@ -5,14 +5,14 @@ inputs:
version:
description: "Version to download and run"
required: false
default: v0.18.3.1
default: v0.18.3.4
runs:
using: "composite"
steps:
- name: Monero Daemon Cache
id: cache-monerod
uses: actions/cache@13aacd865c20de90d75de3b17ebe84f7a17d57d2
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809
with:
path: /usr/bin/monerod
key: monerod-${{ runner.os }}-${{ runner.arch }}-${{ inputs.version }}


@@ -5,7 +5,7 @@ inputs:
monero-version:
description: "Monero version to download and run as a regtest node"
required: false
default: v0.18.3.1
default: v0.18.3.4
bitcoin-version:
description: "Bitcoin version to download and run as a regtest node"


@@ -1 +1 @@
nightly-2024-07-01
nightly-2025-11-01


@@ -27,6 +27,7 @@ jobs:
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features \
-p std-shims \
-p zalloc \
-p patchable-async-sleep \
-p serai-db \
-p serai-env \
-p simple-request


@@ -32,13 +32,15 @@ jobs:
-p dalek-ff-group \
-p minimal-ed448 \
-p ciphersuite \
-p ciphersuite-kp256 \
-p multiexp \
-p schnorr-signatures \
-p dleq \
-p generalized-bulletproofs \
-p generalized-bulletproofs-circuit-abstraction \
-p ec-divisors \
-p generalized-bulletproofs-ec-gadgets \
-p dkg \
-p dkg-recovery \
-p dkg-dealer \
-p dkg-promote \
-p dkg-musig \
-p dkg-pedpop \
-p modular-frost \
-p frost-schnorrkel


@@ -12,13 +12,13 @@ jobs:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Advisory Cache
uses: actions/cache@13aacd865c20de90d75de3b17ebe84f7a17d57d2
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809
with:
path: ~/.cargo/advisory-db
key: rust-advisory-db
- name: Install cargo deny
run: cargo install --locked cargo-deny
run: cargo +1.89 install cargo-deny --version =0.18.3
- name: Run cargo deny
run: cargo deny -L error --all-features check
run: cargo deny -L error --all-features check --hide-inclusion-graph


@@ -11,7 +11,7 @@ jobs:
clippy:
strategy:
matrix:
os: [ubuntu-latest, macos-13, macos-14, windows-latest]
os: [ubuntu-latest, macos-15-intel, macos-latest, windows-latest]
runs-on: ${{ matrix.os }}
steps:
@@ -26,7 +26,7 @@ jobs:
uses: ./.github/actions/build-dependencies
- name: Install nightly rust
run: rustup toolchain install ${{ steps.nightly.outputs.version }} --profile minimal -t wasm32-unknown-unknown -c clippy
run: rustup toolchain install ${{ steps.nightly.outputs.version }} --profile minimal -t wasm32v1-none -c rust-src -c clippy
- name: Run Clippy
run: cargo +${{ steps.nightly.outputs.version }} clippy --all-features --all-targets -- -D warnings -A clippy::items_after_test_module
@@ -46,16 +46,16 @@ jobs:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Advisory Cache
uses: actions/cache@13aacd865c20de90d75de3b17ebe84f7a17d57d2
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809
with:
path: ~/.cargo/advisory-db
key: rust-advisory-db
- name: Install cargo deny
run: cargo install --locked cargo-deny
run: cargo +1.89 install cargo-deny --version =0.18.4
- name: Run cargo deny
run: cargo deny -L error --all-features check
run: cargo deny -L error --all-features check --hide-inclusion-graph
fmt:
runs-on: ubuntu-latest
@@ -79,5 +79,5 @@ jobs:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Verify all dependencies are in use
run: |
cargo install cargo-machete
cargo machete
cargo +1.89 install cargo-machete --version =0.8.0
cargo +1.89 machete


@@ -1,77 +0,0 @@
name: Monero Tests
on:
push:
branches:
- develop
paths:
- "networks/monero/**"
- "processor/**"
pull_request:
paths:
- "networks/monero/**"
- "processor/**"
workflow_dispatch:
jobs:
# Only run these once since they will be consistent regardless of any node
unit-tests:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Test Dependencies
uses: ./.github/actions/test-dependencies
- name: Run Unit Tests Without Features
run: |
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-io --lib
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-generators --lib
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-primitives --lib
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-mlsag --lib
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-clsag --lib
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-borromean --lib
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-bulletproofs --lib
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-serai --lib
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-rpc --lib
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-simple-request-rpc --lib
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-address --lib
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-wallet --lib
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-seed --lib
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package polyseed --lib
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-wallet-util --lib
# Doesn't run unit tests with features as the tests workflow will
integration-tests:
runs-on: ubuntu-latest
# Test against all supported protocol versions
strategy:
matrix:
version: [v0.17.3.2, v0.18.2.0]
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Test Dependencies
uses: ./.github/actions/test-dependencies
with:
monero-version: ${{ matrix.version }}
- name: Run Integration Tests Without Features
run: |
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-serai --test '*'
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-simple-request-rpc --test '*'
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-wallet --test '*'
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-wallet-util --test '*'
- name: Run Integration Tests
# Don't run if the tests workflow also will
if: ${{ matrix.version != 'v0.18.2.0' }}
run: |
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-serai --all-features --test '*'
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-simple-request-rpc --test '*'
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-wallet --all-features --test '*'
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-wallet-util --all-features --test '*'


@@ -33,19 +33,3 @@ jobs:
-p alloy-simple-request-transport \
-p ethereum-serai \
-p serai-ethereum-relayer \
-p monero-io \
-p monero-generators \
-p monero-primitives \
-p monero-mlsag \
-p monero-clsag \
-p monero-borromean \
-p monero-bulletproofs \
-p monero-serai \
-p monero-rpc \
-p monero-simple-request-rpc \
-p monero-address \
-p monero-wallet \
-p monero-seed \
-p polyseed \
-p monero-wallet-util \
-p monero-serai-verify-chain


@@ -1,6 +1,7 @@
# MIT License
#
# Copyright (c) 2022 just-the-docs
# Copyright (c) 2022-2024 Luke Parker
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
@@ -20,31 +21,21 @@
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
# This workflow uses actions that are not certified by GitHub.
# They are provided by a third-party and are governed by
# separate terms of service, privacy policy, and support
# documentation.
# Sample workflow for building and deploying a Jekyll site to GitHub Pages
name: Deploy Jekyll site to Pages
name: Deploy Rust docs and Jekyll site to Pages
on:
push:
branches:
- "develop"
paths:
- "docs/**"
# Allows you to run this workflow manually from the Actions tab
workflow_dispatch:
# Sets permissions of the GITHUB_TOKEN to allow deployment to GitHub Pages
permissions:
contents: read
pages: write
id-token: write
# Allow one concurrent deployment
# Only allow one concurrent deployment
concurrency:
group: "pages"
cancel-in-progress: true
@@ -53,27 +44,37 @@ jobs:
# Build job
build:
runs-on: ubuntu-latest
defaults:
run:
working-directory: docs
steps:
- name: Checkout
uses: actions/checkout@v3
uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Setup Ruby
uses: ruby/setup-ruby@v1
uses: ruby/setup-ruby@44511735964dcb71245e7e55f72539531f7bc0eb
with:
bundler-cache: true
cache-version: 0
working-directory: "${{ github.workspace }}/docs"
- name: Setup Pages
id: pages
uses: actions/configure-pages@v3
uses: actions/configure-pages@983d7736d9b0ae728b81ab479565c72886d7745b
- name: Build with Jekyll
run: bundle exec jekyll build --baseurl "${{ steps.pages.outputs.base_path }}"
run: cd ${{ github.workspace }}/docs && bundle exec jekyll build --baseurl "${{ steps.pages.outputs.base_path }}"
env:
JEKYLL_ENV: production
- name: Get nightly version to use
id: nightly
shell: bash
run: echo "version=$(cat .github/nightly-version)" >> $GITHUB_OUTPUT
- name: Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Build Rust docs
run: |
rustup toolchain install ${{ steps.nightly.outputs.version }} --profile minimal -t wasm32v1-none -c rust-docs
RUSTDOCFLAGS="--cfg docsrs" cargo +${{ steps.nightly.outputs.version }} doc --workspace --no-deps --all-features
mv target/doc docs/_site/rust
- name: Upload artifact
uses: actions/upload-pages-artifact@v1
uses: actions/upload-pages-artifact@7b1f4a764d45c48632c6b24a0339c27f5614fb0b
with:
path: "docs/_site/"
@@ -87,4 +88,4 @@ jobs:
steps:
- name: Deploy to GitHub Pages
id: deployment
uses: actions/deploy-pages@v2
uses: actions/deploy-pages@d6db90164ac5ed86f2b6aed7e0febac5b3c0c03e


@@ -63,6 +63,11 @@ jobs:
-p serai-dex-pallet \
-p serai-validator-sets-primitives \
-p serai-validator-sets-pallet \
-p serai-genesis-liquidity-primitives \
-p serai-genesis-liquidity-pallet \
-p serai-emissions-primitives \
-p serai-emissions-pallet \
-p serai-economic-security-pallet \
-p serai-in-instructions-primitives \
-p serai-in-instructions-pallet \
-p serai-signals-primitives \

.gitignore (vendored, 7 lines changed)

@@ -1,7 +1,14 @@
target
# Don't commit any `Cargo.lock` which aren't the workspace's
Cargo.lock
!./Cargo.lock
# Don't commit any `Dockerfile`, as they're auto-generated, except the only one which isn't
Dockerfile
Dockerfile.fast-epoch
!orchestration/runtime/Dockerfile
.test-logs
.vscode

Cargo.lock (generated, 7051 lines changed)

File diff suppressed because it is too large.


@@ -1,16 +1,8 @@
[workspace]
resolver = "2"
members = [
# Version patches
"patches/parking_lot_core",
"patches/parking_lot",
"patches/zstd",
"patches/rocksdb",
"patches/proc-macro-crate",
# std patches
"patches/matches",
"patches/is-terminal",
# Rewrites/redirects
"patches/option-ext",
@@ -18,6 +10,7 @@ members = [
"common/std-shims",
"common/zalloc",
"common/patchable-async-sleep",
"common/db",
"common/env",
"common/request",
@@ -28,19 +21,18 @@ members = [
"crypto/dalek-ff-group",
"crypto/ed448",
"crypto/ciphersuite",
"crypto/ciphersuite/kp256",
"crypto/multiexp",
"crypto/schnorr",
"crypto/dleq",
"crypto/evrf/secq256k1",
"crypto/evrf/embedwards25519",
"crypto/evrf/generalized-bulletproofs",
"crypto/evrf/circuit-abstraction",
"crypto/evrf/divisors",
"crypto/evrf/ec-gadgets",
"crypto/dkg",
"crypto/dkg/recovery",
"crypto/dkg/dealer",
"crypto/dkg/promote",
"crypto/dkg/musig",
"crypto/dkg/pedpop",
"crypto/frost",
"crypto/schnorrkel",
@@ -50,23 +42,6 @@ members = [
"networks/ethereum",
"networks/ethereum/relayer",
"networks/monero/io",
"networks/monero/generators",
"networks/monero/primitives",
"networks/monero/ringct/mlsag",
"networks/monero/ringct/clsag",
"networks/monero/ringct/borromean",
"networks/monero/ringct/bulletproofs",
"networks/monero",
"networks/monero/rpc",
"networks/monero/rpc/simple-request",
"networks/monero/wallet/address",
"networks/monero/wallet",
"networks/monero/wallet/seed",
"networks/monero/wallet/polyseed",
"networks/monero/wallet/util",
"networks/monero/verify-chain",
"message-queue",
"processor/messages",
@@ -86,6 +61,14 @@ members = [
"substrate/validator-sets/primitives",
"substrate/validator-sets/pallet",
"substrate/genesis-liquidity/primitives",
"substrate/genesis-liquidity/pallet",
"substrate/emissions/primitives",
"substrate/emissions/pallet",
"substrate/economic-security/pallet",
"substrate/in-instructions/primitives",
"substrate/in-instructions/pallet",
@@ -117,56 +100,37 @@ members = [
# to the extensive operations required for Bulletproofs
[profile.dev.package]
subtle = { opt-level = 3 }
curve25519-dalek = { opt-level = 3 }
ff = { opt-level = 3 }
group = { opt-level = 3 }
crypto-bigint = { opt-level = 3 }
secp256k1 = { opt-level = 3 }
curve25519-dalek = { opt-level = 3 }
dalek-ff-group = { opt-level = 3 }
minimal-ed448 = { opt-level = 3 }
multiexp = { opt-level = 3 }
secq256k1 = { opt-level = 3 }
embedwards25519 = { opt-level = 3 }
generalized-bulletproofs = { opt-level = 3 }
generalized-bulletproofs-circuit-abstraction = { opt-level = 3 }
ec-divisors = { opt-level = 3 }
generalized-bulletproofs-ec-gadgets = { opt-level = 3 }
dkg = { opt-level = 3 }
monero-generators = { opt-level = 3 }
monero-borromean = { opt-level = 3 }
monero-bulletproofs = { opt-level = 3 }
monero-mlsag = { opt-level = 3 }
monero-clsag = { opt-level = 3 }
monero-oxide = { opt-level = 3 }
[profile.release]
panic = "unwind"
overflow-checks = true
[patch.crates-io]
# Dependencies from monero-oxide which originate from within our own tree
std-shims = { path = "common/std-shims" }
simple-request = { path = "common/request" }
dalek-ff-group = { path = "crypto/dalek-ff-group" }
flexible-transcript = { path = "crypto/transcript" }
modular-frost = { path = "crypto/frost" }
# https://github.com/rust-lang-nursery/lazy-static.rs/issues/201
lazy_static = { git = "https://github.com/rust-lang-nursery/lazy-static.rs", rev = "5735630d46572f1e5377c8f2ba0f79d18f53b10c" }
# Needed due to dockertest's usage of `Rc`s when we need `Arc`s
dockertest = { git = "https://github.com/orcalabs/dockertest-rs", rev = "4dd6ae24738aa6dc5c89444cc822ea4745517493" }
parking_lot_core = { path = "patches/parking_lot_core" }
parking_lot = { path = "patches/parking_lot" }
# wasmtime pulls in an old version for this
zstd = { path = "patches/zstd" }
# Needed for WAL compression
rocksdb = { path = "patches/rocksdb" }
# proc-macro-crate 2 binds to an old version of toml for msrv so we patch to 3
proc-macro-crate = { path = "patches/proc-macro-crate" }
# is-terminal now has an std-based solution with an equivalent API
is-terminal = { path = "patches/is-terminal" }
# So does matches
# These have `std` alternatives
matches = { path = "patches/matches" }
home = { path = "patches/home" }
# directories-next was created because directories was unmaintained
# directories-next is now unmaintained while directories is maintained
@@ -176,11 +140,11 @@ matches = { path = "patches/matches" }
option-ext = { path = "patches/option-ext" }
directories-next = { path = "patches/directories-next" }
# The official pasta_curves repo doesn't support Zeroize
pasta_curves = { git = "https://github.com/kayabaNerve/pasta_curves", rev = "a46b5be95cacbff54d06aad8d3bbcba42e05d616" }
[workspace.lints.clippy]
uninlined_format_args = "allow" # TODO
unwrap_or_default = "allow"
manual_is_multiple_of = "allow"
incompatible_msrv = "allow" # Manually verified with a GitHub workflow
borrow_as_ptr = "deny"
cast_lossless = "deny"
cast_possible_truncation = "deny"
@@ -205,14 +169,14 @@ large_stack_arrays = "deny"
linkedlist = "deny"
macro_use_imports = "deny"
manual_instant_elapsed = "deny"
manual_let_else = "deny"
# TODO manual_let_else = "deny"
manual_ok_or = "deny"
manual_string_new = "deny"
map_unwrap_or = "deny"
match_bool = "deny"
match_same_arms = "deny"
missing_fields_in_debug = "deny"
needless_continue = "deny"
# TODO needless_continue = "deny"
needless_pass_by_value = "deny"
ptr_cast_constness = "deny"
range_minus_one = "deny"
@@ -220,8 +184,7 @@ range_plus_one = "deny"
redundant_closure_for_method_calls = "deny"
redundant_else = "deny"
string_add_assign = "deny"
unchecked_duration_subtraction = "deny"
uninlined_format_args = "deny"
unchecked_time_subtraction = "deny"
unnecessary_box_returns = "deny"
unnecessary_join = "deny"
unnecessary_wraps = "deny"
@@ -229,3 +192,21 @@ unnested_or_patterns = "deny"
unused_async = "deny"
unused_self = "deny"
zero_sized_map_values = "deny"
# TODO: These were incurred when updating Rust as necessary for compilation, yet aren't being fixed
# at this time due to the impacts it'd have throughout the repository (when this isn't actively the
# primary branch, `next` is)
needless_continue = "allow"
needless_lifetimes = "allow"
useless_conversion = "allow"
empty_line_after_doc_comments = "allow"
manual_div_ceil = "allow"
manual_let_else = "allow"
unnecessary_map_or = "allow"
result_large_err = "allow"
unneeded_struct_pattern = "allow"
[workspace.lints.rust]
unused = "allow" # TODO: https://github.com/rust-lang/rust/issues/147648
mismatched_lifetime_syntaxes = "allow"
unused_attributes = "allow"
unused_parens = "allow"


@@ -5,4 +5,4 @@ a full copy of the AGPL-3.0 License is included in the root of this repository
as a reference text. This copy should be provided with any distribution of a
crate licensed under the AGPL-3.0, as per its terms.
The GitHub actions (`.github/actions`) are licensed under the MIT license.
The GitHub actions/workflows (`.github`) are licensed under the MIT license.


@@ -59,7 +59,6 @@ issued at the discretion of the Immunefi program managers.
- [Website](https://serai.exchange/): https://serai.exchange/
- [Immunefi](https://immunefi.com/bounty/serai/): https://immunefi.com/bounty/serai/
- [Twitter](https://twitter.com/SeraiDEX): https://twitter.com/SeraiDEX
- [Mastodon](https://cryptodon.lol/@serai): https://cryptodon.lol/@serai
- [Discord](https://discord.gg/mpEUtJR3vz): https://discord.gg/mpEUtJR3vz
- [Matrix](https://matrix.to/#/#serai:matrix.org): https://matrix.to/#/#serai:matrix.org
- [Reddit](https://www.reddit.com/r/SeraiDEX/): https://www.reddit.com/r/SeraiDEX/


@@ -18,7 +18,7 @@ workspace = true
[dependencies]
parity-db = { version = "0.4", default-features = false, optional = true }
rocksdb = { version = "0.21", default-features = false, features = ["zstd"], optional = true }
rocksdb = { version = "0.24", default-features = false, features = ["zstd"], optional = true }
[features]
parity-db = ["dep:parity-db"]


@@ -1,5 +1,5 @@
#![cfg_attr(docsrs, feature(doc_cfg))]
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
#![cfg_attr(docsrs, feature(doc_cfg))]
// Obtain a variable from the Serai environment/secret store.
pub fn var(variable: &str) -> Option<String> {


@@ -0,0 +1,19 @@
[package]
name = "patchable-async-sleep"
version = "0.1.0"
description = "An async sleep function, patchable to the preferred runtime"
license = "MIT"
repository = "https://github.com/serai-dex/serai/tree/develop/common/patchable-async-sleep"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = ["async", "sleep", "tokio", "smol", "async-std"]
edition = "2021"
[package.metadata.docs.rs]
all-features = true
rustdoc-args = ["--cfg", "docsrs"]
[lints]
workspace = true
[dependencies]
tokio = { version = "1", default-features = false, features = [ "time"] }


@@ -0,0 +1,7 @@
# Patchable Async Sleep
An async sleep function, patchable to the preferred runtime.
This crate is `tokio`-backed. Applications which don't want to use `tokio`
should patch this crate to one which works with their preferred runtime. The
point of it is to have a minimal API surface to trivially facilitate such work.


@@ -0,0 +1,10 @@
#![cfg_attr(docsrs, feature(doc_cfg))]
#![doc = include_str!("../README.md")]
#![deny(missing_docs)]
use core::time::Duration;
/// Sleep for the specified duration.
pub fn sleep(duration: Duration) -> impl core::future::Future<Output = ()> {
tokio::time::sleep(duration)
}


@@ -7,7 +7,7 @@ repository = "https://github.com/serai-dex/serai/tree/develop/common/simple-requ
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = ["http", "https", "async", "request", "ssl"]
edition = "2021"
rust-version = "1.64"
rust-version = "1.70"
[package.metadata.docs.rs]
all-features = true


@@ -1,4 +1,4 @@
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
#![cfg_attr(docsrs, feature(doc_cfg))]
#![doc = include_str!("../README.md")]
use std::sync::Arc;


@@ -1,13 +1,13 @@
[package]
name = "std-shims"
version = "0.1.1"
version = "0.1.4"
description = "A series of std shims to make alloc more feasible"
license = "MIT"
repository = "https://github.com/serai-dex/serai/tree/develop/common/std-shims"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = ["nostd", "no_std", "alloc", "io"]
edition = "2021"
rust-version = "1.70"
rust-version = "1.64"
[package.metadata.docs.rs]
all-features = true
@@ -17,7 +17,8 @@ rustdoc-args = ["--cfg", "docsrs"]
workspace = true
[dependencies]
spin = { version = "0.9", default-features = false, features = ["use_ticket_mutex", "lazy"] }
rustversion = { version = "1", default-features = false }
spin = { version = "0.10", default-features = false, features = ["use_ticket_mutex", "once", "lazy"] }
hashbrown = { version = "0.14", default-features = false, features = ["ahash", "inline-more"] }
[features]


@@ -3,4 +3,9 @@
A crate which passes through to std when the default `std` feature is enabled,
yet provides a series of shims when it isn't.
`HashSet` and `HashMap` are provided via `hashbrown`.
No guarantee of one-to-one parity is provided. The shims provided aim to be sufficient for the
average case.
`HashSet` and `HashMap` are provided via `hashbrown`. Synchronization primitives are provided via
`spin` (avoiding a requirement on `critical-section`).


@@ -1,4 +1,4 @@
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
#![cfg_attr(docsrs, feature(doc_cfg))]
#![doc = include_str!("../README.md")]
#![cfg_attr(not(feature = "std"), no_std)]
@@ -11,3 +11,64 @@ pub mod io;
pub use alloc::vec;
pub use alloc::str;
pub use alloc::string;
pub mod prelude {
#[rustversion::before(1.73)]
#[doc(hidden)]
pub trait StdShimsDivCeil {
fn div_ceil(self, rhs: Self) -> Self;
}
#[rustversion::before(1.73)]
mod impl_divceil {
use super::StdShimsDivCeil;
impl StdShimsDivCeil for u8 {
fn div_ceil(self, rhs: Self) -> Self {
(self + (rhs - 1)) / rhs
}
}
impl StdShimsDivCeil for u16 {
fn div_ceil(self, rhs: Self) -> Self {
(self + (rhs - 1)) / rhs
}
}
impl StdShimsDivCeil for u32 {
fn div_ceil(self, rhs: Self) -> Self {
(self + (rhs - 1)) / rhs
}
}
impl StdShimsDivCeil for u64 {
fn div_ceil(self, rhs: Self) -> Self {
(self + (rhs - 1)) / rhs
}
}
impl StdShimsDivCeil for u128 {
fn div_ceil(self, rhs: Self) -> Self {
(self + (rhs - 1)) / rhs
}
}
impl StdShimsDivCeil for usize {
fn div_ceil(self, rhs: Self) -> Self {
(self + (rhs - 1)) / rhs
}
}
}
#[cfg(feature = "std")]
#[rustversion::before(1.74)]
#[doc(hidden)]
pub trait StdShimsIoErrorOther {
fn other<E>(error: E) -> Self
where
E: Into<Box<dyn std::error::Error + Send + Sync>>;
}
#[cfg(feature = "std")]
#[rustversion::before(1.74)]
impl StdShimsIoErrorOther for std::io::Error {
fn other<E>(error: E) -> Self
where
E: Into<Box<dyn std::error::Error + Send + Sync>>,
{
std::io::Error::new(std::io::ErrorKind::Other, error)
}
}
}


@@ -25,7 +25,11 @@ mod mutex_shim {
}
pub use mutex_shim::{ShimMutex as Mutex, MutexGuard};
#[cfg(feature = "std")]
pub use std::sync::LazyLock;
#[cfg(not(feature = "std"))]
pub use spin::Lazy as LazyLock;
#[rustversion::before(1.80)]
#[cfg(feature = "std")]
pub use spin::Lazy as LazyLock;
#[rustversion::since(1.80)]
#[cfg(feature = "std")]
pub use std::sync::LazyLock;


@@ -7,7 +7,7 @@ repository = "https://github.com/serai-dex/serai/tree/develop/common/zalloc"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = []
edition = "2021"
rust-version = "1.77.0"
rust-version = "1.77"
[package.metadata.docs.rs]
all-features = true


@@ -1,5 +1,5 @@
#![cfg_attr(docsrs, feature(doc_cfg))]
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
#![cfg_attr(docsrs, feature(doc_cfg))]
#![cfg_attr(all(zalloc_rustc_nightly, feature = "allocator"), feature(allocator_api))]
//! Implementation of a Zeroizing Allocator, enabling zeroizing memory on deallocation.


@@ -20,14 +20,15 @@ workspace = true
async-trait = { version = "0.1", default-features = false }
zeroize = { version = "^1.5", default-features = false, features = ["std"] }
bitvec = { version = "1", default-features = false, features = ["std"] }
rand_core = { version = "0.6", default-features = false, features = ["std"] }
blake2 = { version = "0.10", default-features = false, features = ["std"] }
transcript = { package = "flexible-transcript", path = "../crypto/transcript", default-features = false, features = ["std", "recommended"] }
dalek-ff-group = { path = "../crypto/dalek-ff-group", default-features = false, features = ["std"] }
ciphersuite = { path = "../crypto/ciphersuite", default-features = false, features = ["std"] }
schnorr = { package = "schnorr-signatures", path = "../crypto/schnorr", default-features = false, features = ["std"] }
schnorr = { package = "schnorr-signatures", path = "../crypto/schnorr", default-features = false, features = ["std", "aggregate"] }
dkg-musig = { path = "../crypto/dkg/musig", default-features = false, features = ["std"] }
frost = { package = "modular-frost", path = "../crypto/frost" }
frost-schnorrkel = { path = "../crypto/schnorrkel" }
@@ -41,7 +42,7 @@ processor-messages = { package = "serai-processor-messages", path = "../processo
message-queue = { package = "serai-message-queue", path = "../message-queue" }
tributary = { package = "tributary-chain", path = "./tributary" }
sp-application-crypto = { git = "https://github.com/serai-dex/substrate", default-features = false, features = ["std"] }
sp-application-crypto = { git = "https://github.com/serai-dex/patch-polkadot-sdk", rev = "da19e1f8ca7a9e2cbf39fbfa493918eeeb45e10b", default-features = false, features = ["std"] }
serai-client = { path = "../substrate/client", default-features = false, features = ["serai", "borsh"] }
hex = { version = "0.4", default-features = false, features = ["std"] }
@@ -56,8 +57,8 @@ libp2p = { version = "0.52", default-features = false, features = ["tokio", "tcp
[dev-dependencies]
tributary = { package = "tributary-chain", path = "./tributary", features = ["tests"] }
sp-application-crypto = { git = "https://github.com/serai-dex/substrate", default-features = false, features = ["std"] }
sp-runtime = { git = "https://github.com/serai-dex/substrate", default-features = false, features = ["std"] }
sp-application-crypto = { git = "https://github.com/serai-dex/patch-polkadot-sdk", rev = "da19e1f8ca7a9e2cbf39fbfa493918eeeb45e10b", default-features = false, features = ["std"] }
sp-runtime = { git = "https://github.com/serai-dex/patch-polkadot-sdk", rev = "da19e1f8ca7a9e2cbf39fbfa493918eeeb45e10b", default-features = false, features = ["std"] }
[features]
longer-reattempts = []

View File

@@ -12,9 +12,9 @@ use tokio::{
use borsh::BorshSerialize;
use sp_application_crypto::RuntimePublic;
use serai_client::{
primitives::{NETWORKS, NetworkId, Signature},
validator_sets::primitives::{Session, ValidatorSet},
SeraiError, TemporalSerai, Serai,
primitives::{ExternalNetworkId, EXTERNAL_NETWORKS},
validator_sets::primitives::{ExternalValidatorSet, Session},
Serai, SeraiError, TemporalSerai,
};
use serai_db::{Get, DbTxn, Db, create_db};
@@ -28,17 +28,17 @@ use crate::{
create_db! {
CosignDb {
ReceivedCosign: (set: ValidatorSet, block: [u8; 32]) -> CosignedBlock,
LatestCosign: (network: NetworkId) -> CosignedBlock,
DistinctChain: (set: ValidatorSet) -> (),
ReceivedCosign: (set: ExternalValidatorSet, block: [u8; 32]) -> CosignedBlock,
LatestCosign: (network: ExternalNetworkId) -> CosignedBlock,
DistinctChain: (set: ExternalValidatorSet) -> (),
}
}
pub struct CosignEvaluator<D: Db> {
db: Mutex<D>,
serai: Arc<Serai>,
stakes: RwLock<Option<HashMap<NetworkId, u64>>>,
latest_cosigns: RwLock<HashMap<NetworkId, CosignedBlock>>,
stakes: RwLock<Option<HashMap<ExternalNetworkId, u64>>>,
latest_cosigns: RwLock<HashMap<ExternalNetworkId, CosignedBlock>>,
}
impl<D: Db> CosignEvaluator<D> {
@@ -79,7 +79,7 @@ impl<D: Db> CosignEvaluator<D> {
let serai = self.serai.as_of_latest_finalized_block().await?;
let mut stakes = HashMap::new();
for network in NETWORKS {
for network in EXTERNAL_NETWORKS {
// Use whether this network has published a Batch as a short-circuit for whether they've set a key
let set_key = serai.in_instructions().last_batch_for_network(network).await?.is_some();
if set_key {
@@ -87,7 +87,7 @@ impl<D: Db> CosignEvaluator<D> {
network,
serai
.validator_sets()
.total_allocated_stake(network)
.total_allocated_stake(network.into())
.await?
.expect("network which published a batch didn't have a stake set")
.0,
@@ -126,9 +126,9 @@ impl<D: Db> CosignEvaluator<D> {
async fn set_with_keys_fn(
serai: &TemporalSerai<'_>,
network: NetworkId,
) -> Result<Option<ValidatorSet>, SeraiError> {
let Some(latest_session) = serai.validator_sets().session(network).await? else {
network: ExternalNetworkId,
) -> Result<Option<ExternalValidatorSet>, SeraiError> {
let Some(latest_session) = serai.validator_sets().session(network.into()).await? else {
log::warn!("received cosign from {:?}, which doesn't yet have a session", network);
return Ok(None);
};
@@ -136,13 +136,13 @@ impl<D: Db> CosignEvaluator<D> {
Ok(Some(
if serai
.validator_sets()
.keys(ValidatorSet { network, session: prior_session })
.keys(ExternalValidatorSet { network, session: prior_session })
.await?
.is_some()
{
ValidatorSet { network, session: prior_session }
ExternalValidatorSet { network, session: prior_session }
} else {
ValidatorSet { network, session: latest_session }
ExternalValidatorSet { network, session: latest_session }
},
))
}
@@ -164,7 +164,7 @@ impl<D: Db> CosignEvaluator<D> {
if !keys
.0
.verify(&cosign_block_msg(cosign.block_number, cosign.block), &Signature(cosign.signature))
.verify(&cosign_block_msg(cosign.block_number, cosign.block), &cosign.signature.into())
{
log::warn!("received cosigned block with an invalid signature");
return Ok(());
@@ -204,16 +204,12 @@ impl<D: Db> CosignEvaluator<D> {
let mut total_stake = 0;
let mut total_on_distinct_chain = 0;
for network in NETWORKS {
if network == NetworkId::Serai {
continue;
}
for network in EXTERNAL_NETWORKS {
// Get the current set for this network
let set_with_keys = {
let mut res;
while {
res = set_with_keys_fn(&serai, cosign.network).await;
res = set_with_keys_fn(&serai, network).await;
res.is_err()
} {
log::error!(
@@ -231,7 +227,8 @@ impl<D: Db> CosignEvaluator<D> {
let stake = {
let mut res;
while {
res = serai.validator_sets().total_allocated_stake(set_with_keys.network).await;
res =
serai.validator_sets().total_allocated_stake(set_with_keys.network.into()).await;
res.is_err()
} {
log::error!(
@@ -271,7 +268,7 @@ impl<D: Db> CosignEvaluator<D> {
#[allow(clippy::new_ret_no_self)]
pub fn new<P: P2p>(db: D, p2p: P, serai: Arc<Serai>) -> mpsc::UnboundedSender<CosignedBlock> {
let mut latest_cosigns = HashMap::new();
for network in NETWORKS {
for network in EXTERNAL_NETWORKS {
if let Some(cosign) = LatestCosign::get(&db, network) {
latest_cosigns.insert(network, cosign);
}

View File

@@ -6,9 +6,9 @@ use blake2::{
use scale::Encode;
use borsh::{BorshSerialize, BorshDeserialize};
use serai_client::{
primitives::NetworkId,
validator_sets::primitives::{Session, ValidatorSet},
in_instructions::primitives::{Batch, SignedBatch},
primitives::ExternalNetworkId,
validator_sets::primitives::{ExternalValidatorSet, Session},
};
pub use serai_db::*;
@@ -18,21 +18,21 @@ use crate::tributary::{TributarySpec, Transaction, scanner::RecognizedIdType};
create_db!(
MainDb {
HandledMessageDb: (network: NetworkId) -> u64,
HandledMessageDb: (network: ExternalNetworkId) -> u64,
ActiveTributaryDb: () -> Vec<u8>,
RetiredTributaryDb: (set: ValidatorSet) -> (),
RetiredTributaryDb: (set: ExternalValidatorSet) -> (),
FirstPreprocessDb: (
network: NetworkId,
network: ExternalNetworkId,
id_type: RecognizedIdType,
id: &[u8]
) -> Vec<Vec<u8>>,
LastReceivedBatchDb: (network: NetworkId) -> u32,
ExpectedBatchDb: (network: NetworkId, id: u32) -> [u8; 32],
BatchDb: (network: NetworkId, id: u32) -> SignedBatch,
LastVerifiedBatchDb: (network: NetworkId) -> u32,
HandoverBatchDb: (set: ValidatorSet) -> u32,
LookupHandoverBatchDb: (network: NetworkId, batch: u32) -> Session,
QueuedBatchesDb: (set: ValidatorSet) -> Vec<u8>
LastReceivedBatchDb: (network: ExternalNetworkId) -> u32,
ExpectedBatchDb: (network: ExternalNetworkId, id: u32) -> [u8; 32],
BatchDb: (network: ExternalNetworkId, id: u32) -> SignedBatch,
LastVerifiedBatchDb: (network: ExternalNetworkId) -> u32,
HandoverBatchDb: (set: ExternalValidatorSet) -> u32,
LookupHandoverBatchDb: (network: ExternalNetworkId, batch: u32) -> Session,
QueuedBatchesDb: (set: ExternalValidatorSet) -> Vec<u8>
}
);
@@ -61,7 +61,7 @@ impl ActiveTributaryDb {
ActiveTributaryDb::set(txn, &existing_bytes);
}
pub fn retire_tributary(txn: &mut impl DbTxn, set: ValidatorSet) {
pub fn retire_tributary(txn: &mut impl DbTxn, set: ExternalValidatorSet) {
let mut active = Self::active_tributaries(txn).1;
for i in 0 .. active.len() {
if active[i].set() == set {
@@ -82,7 +82,7 @@ impl ActiveTributaryDb {
impl FirstPreprocessDb {
pub fn save_first_preprocess(
txn: &mut impl DbTxn,
network: NetworkId,
network: ExternalNetworkId,
id_type: RecognizedIdType,
id: &[u8],
preprocess: &Vec<Vec<u8>>,
@@ -108,19 +108,19 @@ impl ExpectedBatchDb {
}
impl HandoverBatchDb {
pub fn set_handover_batch(txn: &mut impl DbTxn, set: ValidatorSet, batch: u32) {
pub fn set_handover_batch(txn: &mut impl DbTxn, set: ExternalValidatorSet, batch: u32) {
Self::set(txn, set, &batch);
LookupHandoverBatchDb::set(txn, set.network, batch, &set.session);
}
}
impl QueuedBatchesDb {
pub fn queue(txn: &mut impl DbTxn, set: ValidatorSet, batch: &Transaction) {
pub fn queue(txn: &mut impl DbTxn, set: ExternalValidatorSet, batch: &Transaction) {
let mut batches = Self::get(txn, set).unwrap_or_default();
batch.write(&mut batches).unwrap();
Self::set(txn, set, &batches);
}
pub fn take(txn: &mut impl DbTxn, set: ValidatorSet) -> Vec<Transaction> {
pub fn take(txn: &mut impl DbTxn, set: ExternalValidatorSet) -> Vec<Transaction> {
let batches_vec = Self::get(txn, set).unwrap_or_default();
txn.del(Self::key(set));
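
Judging from the call sites above (`get`, `set`, `key`, and `txn.del`), each entry in `create_db!` expands to a unit type whose accessors are keyed by the listed parameters. A hypothetical sketch of how these generated accessors are used (signatures inferred from the call sites, not taken from the macro):

```rust
// Record that `id` is the most recent batch received for `network`.
fn track_batch(txn: &mut impl DbTxn, network: ExternalNetworkId, id: u32) {
  // Typed write: key parameters first, then a reference to the value.
  LastReceivedBatchDb::set(txn, network, &id);
  // Typed read against the same key parameters returns an `Option`.
  assert_eq!(LastReceivedBatchDb::get(txn, network), Some(id));
}
```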

View File

@@ -1,3 +1,5 @@
#![expect(clippy::cast_possible_truncation)]
use core::ops::Deref;
use std::{
sync::{OnceLock, Arc},
@@ -8,22 +10,24 @@ use std::{
use zeroize::{Zeroize, Zeroizing};
use rand_core::OsRng;
use dalek_ff_group::Ristretto;
use ciphersuite::{
group::{
ff::{Field, PrimeField},
GroupEncoding,
},
Ciphersuite, Ristretto,
Ciphersuite,
};
use schnorr::SchnorrSignature;
use frost::Participant;
use serai_db::{DbTxn, Db};
use scale::Encode;
use borsh::BorshSerialize;
use serai_client::{
primitives::NetworkId,
validator_sets::primitives::{Session, ValidatorSet, KeyPair},
primitives::ExternalNetworkId,
validator_sets::primitives::{ExternalValidatorSet, KeyPair, Session},
Public, Serai, SeraiInInstructions,
};
@@ -78,7 +82,7 @@ pub struct ActiveTributary<D: Db, P: P2p> {
#[derive(Clone)]
pub enum TributaryEvent<D: Db, P: P2p> {
NewTributary(ActiveTributary<D, P>),
TributaryRetired(ValidatorSet),
TributaryRetired(ExternalValidatorSet),
}
// Creates a new tributary and sends it to all listeners.
@@ -113,17 +117,16 @@ async fn add_tributary<D: Db, Pro: Processors, P: P2p>(
// If we're rebooting, we'll re-fire this message
// This is safe due to the message-queue deduplicating based off the intent system
let set = spec.set();
let our_i = spec
.i(&[], Ristretto::generator() * key.deref())
.expect("adding a tributary for a set we aren't in set for");
processors
.send(
set.network,
processor_messages::key_gen::CoordinatorMessage::GenerateKey {
session: set.session,
threshold: spec.t(),
evrf_public_keys: spec.evrf_public_keys(),
// TODO
// params: frost::ThresholdParams::new(spec.t(), spec.n(&[]), our_i.start).unwrap(),
// shares: u16::from(our_i.end) - u16::from(our_i.start),
id: processor_messages::key_gen::KeyGenId { session: set.session, attempt: 0 },
params: frost::ThresholdParams::new(spec.t(), spec.n(&[]), our_i.start).unwrap(),
shares: u16::from(our_i.end) - u16::from(our_i.start),
},
)
.await;
@@ -145,7 +148,7 @@ async fn handle_processor_message<D: Db, P: P2p>(
p2p: &P,
cosign_channel: &mpsc::UnboundedSender<CosignedBlock>,
tributaries: &HashMap<Session, ActiveTributary<D, P>>,
network: NetworkId,
network: ExternalNetworkId,
msg: &processors::Message,
) -> bool {
#[allow(clippy::nonminimal_bool)]
@@ -166,9 +169,12 @@ async fn handle_processor_message<D: Db, P: P2p>(
// We'll only receive these if we fired GenerateKey, which we'll only do if we're
// in-set, making the Tributary relevant
ProcessorMessage::KeyGen(inner_msg) => match inner_msg {
key_gen::ProcessorMessage::Participation { session, .. } |
key_gen::ProcessorMessage::GeneratedKeyPair { session, .. } |
key_gen::ProcessorMessage::Blame { session, .. } => Some(*session),
key_gen::ProcessorMessage::Commitments { id, .. } |
key_gen::ProcessorMessage::InvalidCommitments { id, .. } |
key_gen::ProcessorMessage::Shares { id, .. } |
key_gen::ProcessorMessage::InvalidShare { id, .. } |
key_gen::ProcessorMessage::GeneratedKeyPair { id, .. } |
key_gen::ProcessorMessage::Blame { id, .. } => Some(id.session),
},
ProcessorMessage::Sign(inner_msg) => match inner_msg {
// We'll only receive InvalidParticipant/Preprocess/Share if we're actively signing
@@ -190,7 +196,8 @@ async fn handle_processor_message<D: Db, P: P2p>(
.iter()
.map(|plan| plan.session)
.filter(|session| {
RetiredTributaryDb::get(&txn, ValidatorSet { network, session: *session }).is_none()
RetiredTributaryDb::get(&txn, ExternalValidatorSet { network, session: *session })
.is_none()
})
.collect::<HashSet<_>>();
@@ -262,14 +269,17 @@ async fn handle_processor_message<D: Db, P: P2p>(
}
// This causes an action on Substrate yet not on any Tributary
coordinator::ProcessorMessage::SignedSlashReport { session, signature } => {
let set = ValidatorSet { network, session: *session };
let set = ExternalValidatorSet { network, session: *session };
let signature: &[u8] = signature.as_ref();
let signature = serai_client::Signature(signature.try_into().unwrap());
let signature = <[u8; 64]>::try_from(signature).unwrap();
let signature: serai_client::Signature = signature.into();
let slashes = crate::tributary::SlashReport::get(&txn, set)
.expect("signed slash report despite not having slash report locally");
let slashes_pubs =
slashes.iter().map(|(address, points)| (Public(*address), *points)).collect::<Vec<_>>();
let slashes_pubs = slashes
.iter()
.map(|(address, points)| (Public::from(*address), *points))
.collect::<Vec<_>>();
let tx = serai_client::SeraiValidatorSets::report_slashes(
network,
@@ -279,7 +289,7 @@ async fn handle_processor_message<D: Db, P: P2p>(
.collect::<Vec<_>>()
.try_into()
.unwrap(),
signature.clone(),
signature,
);
loop {
@@ -390,7 +400,7 @@ async fn handle_processor_message<D: Db, P: P2p>(
if let Some(relevant_tributary_value) = relevant_tributary {
if RetiredTributaryDb::get(
&txn,
ValidatorSet { network: msg.network, session: relevant_tributary_value },
ExternalValidatorSet { network: msg.network, session: relevant_tributary_value },
)
.is_some()
{
@@ -418,32 +428,124 @@ async fn handle_processor_message<D: Db, P: P2p>(
let txs = match msg.msg.clone() {
ProcessorMessage::KeyGen(inner_msg) => match inner_msg {
key_gen::ProcessorMessage::Participation { session, participation } => {
assert_eq!(session, spec.set().session);
vec![Transaction::DkgParticipation { participation, signed: Transaction::empty_signed() }]
}
key_gen::ProcessorMessage::GeneratedKeyPair { session, substrate_key, network_key } => {
assert_eq!(session, spec.set().session);
crate::tributary::generated_key_pair::<D>(
&mut txn,
genesis,
&KeyPair(Public(substrate_key), network_key.try_into().unwrap()),
);
// Create a MuSig-based machine to inform Substrate of this key generation
let confirmation_nonces =
crate::tributary::dkg_confirmation_nonces(key, spec, &mut txn, 0);
vec![Transaction::DkgConfirmationNonces {
attempt: 0,
confirmation_nonces,
key_gen::ProcessorMessage::Commitments { id, commitments } => {
vec![Transaction::DkgCommitments {
attempt: id.attempt,
commitments,
signed: Transaction::empty_signed(),
}]
}
key_gen::ProcessorMessage::Blame { session, participant } => {
assert_eq!(session, spec.set().session);
let participant = spec.reverse_lookup_i(participant).unwrap();
vec![Transaction::RemoveParticipant { participant, signed: Transaction::empty_signed() }]
key_gen::ProcessorMessage::InvalidCommitments { id, faulty } => {
// This doesn't have guaranteed timing
//
// While the party *should* be fatally slashed and not included in future attempts,
// they'll actually be fatally slashed (assuming liveness before the Tributary retires)
// and not included in future attempts *which begin after the latency window completes*
let participant = spec
.reverse_lookup_i(
&crate::tributary::removed_as_of_dkg_attempt(&txn, spec.genesis(), id.attempt)
.expect("participating in DKG attempt yet we didn't save who was removed"),
faulty,
)
.unwrap();
vec![Transaction::RemoveParticipantDueToDkg {
participant,
signed: Transaction::empty_signed(),
}]
}
key_gen::ProcessorMessage::Shares { id, mut shares } => {
// Create a MuSig-based machine to inform Substrate of this key generation
let nonces = crate::tributary::dkg_confirmation_nonces(key, spec, &mut txn, id.attempt);
let removed = crate::tributary::removed_as_of_dkg_attempt(&txn, genesis, id.attempt)
.expect("participating in a DKG attempt yet we didn't track who was removed yet?");
let our_i = spec
.i(&removed, pub_key)
.expect("processor message to DKG for an attempt we aren't a validator in");
// `tx_shares` needs to be built here: while the shares can be serialized from the
// HashMap without further context, they can't be deserialized without context
let mut tx_shares = Vec::with_capacity(shares.len());
for shares in &mut shares {
tx_shares.push(vec![]);
for i in 1 ..= spec.n(&removed) {
let i = Participant::new(i).unwrap();
if our_i.contains(&i) {
if shares.contains_key(&i) {
panic!("processor sent us our own shares");
}
continue;
}
tx_shares.last_mut().unwrap().push(
shares.remove(&i).expect("processor didn't send share for another validator"),
);
}
}
vec![Transaction::DkgShares {
attempt: id.attempt,
shares: tx_shares,
confirmation_nonces: nonces,
signed: Transaction::empty_signed(),
}]
}
key_gen::ProcessorMessage::InvalidShare { id, accuser, faulty, blame } => {
vec![Transaction::InvalidDkgShare {
attempt: id.attempt,
accuser,
faulty,
blame,
signed: Transaction::empty_signed(),
}]
}
key_gen::ProcessorMessage::GeneratedKeyPair { id, substrate_key, network_key } => {
// TODO2: Check the KeyGenId fields
// Tell the Tributary the key pair, get back the share for the MuSig signature
let share = crate::tributary::generated_key_pair::<D>(
&mut txn,
key,
spec,
&KeyPair(Public::from(substrate_key), network_key.try_into().unwrap()),
id.attempt,
);
// TODO: Move this into generated_key_pair?
match share {
Ok(share) => {
vec![Transaction::DkgConfirmed {
attempt: id.attempt,
confirmation_share: share,
signed: Transaction::empty_signed(),
}]
}
Err(p) => {
let participant = spec
.reverse_lookup_i(
&crate::tributary::removed_as_of_dkg_attempt(&txn, spec.genesis(), id.attempt)
.expect("participating in DKG attempt yet we didn't save who was removed"),
p,
)
.unwrap();
vec![Transaction::RemoveParticipantDueToDkg {
participant,
signed: Transaction::empty_signed(),
}]
}
}
}
key_gen::ProcessorMessage::Blame { id, participant } => {
let participant = spec
.reverse_lookup_i(
&crate::tributary::removed_as_of_dkg_attempt(&txn, spec.genesis(), id.attempt)
.expect("participating in DKG attempt yet we didn't save who was removed"),
participant,
)
.unwrap();
vec![Transaction::RemoveParticipantDueToDkg {
participant,
signed: Transaction::empty_signed(),
}]
}
},
ProcessorMessage::Sign(msg) => match msg {
@@ -687,7 +789,7 @@ async fn handle_processor_messages<D: Db, Pro: Processors, P: P2p>(
processors: Pro,
p2p: P,
cosign_channel: mpsc::UnboundedSender<CosignedBlock>,
network: NetworkId,
network: ExternalNetworkId,
mut tributary_event: mpsc::UnboundedReceiver<TributaryEvent<D, P>>,
) {
let mut tributaries = HashMap::new();
@@ -736,7 +838,7 @@ async fn handle_processor_messages<D: Db, Pro: Processors, P: P2p>(
#[allow(clippy::too_many_arguments)]
async fn handle_cosigns_and_batch_publication<D: Db, P: P2p>(
mut db: D,
network: NetworkId,
network: ExternalNetworkId,
mut tributary_event: mpsc::UnboundedReceiver<TributaryEvent<D, P>>,
) {
let mut tributaries = HashMap::new();
@@ -810,7 +912,7 @@ async fn handle_cosigns_and_batch_publication<D: Db, P: P2p>(
for batch in start_id ..= last_id {
let is_pre_handover = LookupHandoverBatchDb::get(&txn, network, batch + 1);
if let Some(session) = is_pre_handover {
let set = ValidatorSet { network, session };
let set = ExternalValidatorSet { network, session };
let mut queued = QueuedBatchesDb::take(&mut txn, set);
// is_handover_batch is only set for handover `Batch`s we're participating in, making
// this safe
@@ -828,7 +930,8 @@ async fn handle_cosigns_and_batch_publication<D: Db, P: P2p>(
let is_handover = LookupHandoverBatchDb::get(&txn, network, batch);
if let Some(session) = is_handover {
for queued in QueuedBatchesDb::take(&mut txn, ValidatorSet { network, session }) {
for queued in QueuedBatchesDb::take(&mut txn, ExternalValidatorSet { network, session })
{
to_publish.push((session, queued));
}
}
@@ -875,10 +978,7 @@ pub async fn handle_processors<D: Db, Pro: Processors, P: P2p>(
mut tributary_event: broadcast::Receiver<TributaryEvent<D, P>>,
) {
let mut channels = HashMap::new();
for network in serai_client::primitives::NETWORKS {
if network == NetworkId::Serai {
continue;
}
for network in serai_client::primitives::EXTERNAL_NETWORKS {
let (processor_send, processor_recv) = mpsc::unbounded_channel();
tokio::spawn(handle_processor_messages(
db.clone(),
@@ -1100,7 +1200,7 @@ pub async fn run<D: Db, Pro: Processors, P: P2p>(
}
});
move |set: ValidatorSet, genesis, id_type, id: Vec<u8>| {
move |set: ExternalValidatorSet, genesis, id_type, id: Vec<u8>| {
log::debug!("recognized ID {:?} {}", id_type, hex::encode(&id));
let mut raw_db = raw_db.clone();
let key = key.clone();

View File

@@ -11,7 +11,9 @@ use rand_core::{RngCore, OsRng};
use scale::{Decode, Encode};
use borsh::{BorshSerialize, BorshDeserialize};
use serai_client::{primitives::NetworkId, validator_sets::primitives::ValidatorSet, Serai};
use serai_client::{
primitives::ExternalNetworkId, validator_sets::primitives::ExternalValidatorSet, Serai,
};
use serai_db::Db;
@@ -69,7 +71,7 @@ const BLOCKS_PER_BATCH: usize = BLOCKS_PER_MINUTE + 1;
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug, BorshSerialize, BorshDeserialize)]
pub struct CosignedBlock {
pub network: NetworkId,
pub network: ExternalNetworkId,
pub block_number: u64,
pub block: [u8; 32],
pub signature: [u8; 64],
@@ -208,8 +210,8 @@ pub struct HeartbeatBatch {
pub trait P2p: Send + Sync + Clone + fmt::Debug + TributaryP2p {
type Id: Send + Sync + Clone + Copy + fmt::Debug;
async fn subscribe(&self, set: ValidatorSet, genesis: [u8; 32]);
async fn unsubscribe(&self, set: ValidatorSet, genesis: [u8; 32]);
async fn subscribe(&self, set: ExternalValidatorSet, genesis: [u8; 32]);
async fn unsubscribe(&self, set: ExternalValidatorSet, genesis: [u8; 32]);
async fn send_raw(&self, to: Self::Id, msg: Vec<u8>);
async fn broadcast_raw(&self, kind: P2pMessageKind, msg: Vec<u8>);
@@ -309,7 +311,7 @@ struct Behavior {
#[allow(clippy::type_complexity)]
#[derive(Clone)]
pub struct LibP2p {
subscribe: Arc<Mutex<mpsc::UnboundedSender<(bool, ValidatorSet, [u8; 32])>>>,
subscribe: Arc<Mutex<mpsc::UnboundedSender<(bool, ExternalValidatorSet, [u8; 32])>>>,
send: Arc<Mutex<mpsc::UnboundedSender<(PeerId, Vec<u8>)>>>,
broadcast: Arc<Mutex<mpsc::UnboundedSender<(P2pMessageKind, Vec<u8>)>>>,
receive: Arc<Mutex<mpsc::UnboundedReceiver<Message<Self>>>>,
@@ -397,7 +399,7 @@ impl LibP2p {
let (receive_send, receive_recv) = mpsc::unbounded_channel();
let (subscribe_send, mut subscribe_recv) = mpsc::unbounded_channel();
fn topic_for_set(set: ValidatorSet) -> IdentTopic {
fn topic_for_set(set: ExternalValidatorSet) -> IdentTopic {
IdentTopic::new(format!("{LIBP2P_TOPIC}-{}", hex::encode(set.encode())))
}
@@ -407,7 +409,8 @@ impl LibP2p {
// The addrs we're currently dialing, and the networks associated with them
let dialing_peers = Arc::new(RwLock::new(HashMap::new()));
// The peers we're currently connected to, and the networks associated with them
let connected_peers = Arc::new(RwLock::new(HashMap::<Multiaddr, HashSet<NetworkId>>::new()));
let connected_peers =
Arc::new(RwLock::new(HashMap::<Multiaddr, HashSet<ExternalNetworkId>>::new()));
// Find and connect to peers
let (connect_to_network_send, mut connect_to_network_recv) =
@@ -420,7 +423,7 @@ impl LibP2p {
let connect_to_network_send = connect_to_network_send.clone();
async move {
loop {
let connect = |network: NetworkId, addr: Multiaddr| {
let connect = |network: ExternalNetworkId, addr: Multiaddr| {
let dialing_peers = dialing_peers.clone();
let connected_peers = connected_peers.clone();
let to_dial_send = to_dial_send.clone();
@@ -507,7 +510,7 @@ impl LibP2p {
connect_to_network_networks.insert(network);
}
for network in connect_to_network_networks {
if let Ok(mut nodes) = serai.p2p_validators(network).await {
if let Ok(mut nodes) = serai.p2p_validators(network.into()).await {
// If there's an insufficient amount of nodes known, connect to all yet add it
// back and break
if nodes.len() < TARGET_PEERS {
@@ -557,7 +560,7 @@ impl LibP2p {
// Subscribe to any new topics
set = subscribe_recv.recv() => {
let (subscribe, set, genesis): (_, ValidatorSet, [u8; 32]) =
let (subscribe, set, genesis): (_, ExternalValidatorSet, [u8; 32]) =
set.expect("subscribe_recv closed. are we shutting down?");
let topic = topic_for_set(set);
if subscribe {
@@ -776,7 +779,7 @@ impl LibP2p {
impl P2p for LibP2p {
type Id = PeerId;
async fn subscribe(&self, set: ValidatorSet, genesis: [u8; 32]) {
async fn subscribe(&self, set: ExternalValidatorSet, genesis: [u8; 32]) {
self
.subscribe
.lock()
@@ -785,7 +788,7 @@ impl P2p for LibP2p {
.expect("subscribe_send closed. are we shutting down?");
}
async fn unsubscribe(&self, set: ValidatorSet, genesis: [u8; 32]) {
async fn unsubscribe(&self, set: ExternalValidatorSet, genesis: [u8; 32]) {
self
.subscribe
.lock()

View File

@@ -1,6 +1,6 @@
use std::sync::Arc;
use serai_client::primitives::NetworkId;
use serai_client::primitives::ExternalNetworkId;
use processor_messages::{ProcessorMessage, CoordinatorMessage};
use message_queue::{Service, Metadata, client::MessageQueue};
@@ -8,27 +8,27 @@ use message_queue::{Service, Metadata, client::MessageQueue};
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct Message {
pub id: u64,
pub network: NetworkId,
pub network: ExternalNetworkId,
pub msg: ProcessorMessage,
}
#[async_trait::async_trait]
pub trait Processors: 'static + Send + Sync + Clone {
async fn send(&self, network: NetworkId, msg: impl Send + Into<CoordinatorMessage>);
async fn recv(&self, network: NetworkId) -> Message;
async fn send(&self, network: ExternalNetworkId, msg: impl Send + Into<CoordinatorMessage>);
async fn recv(&self, network: ExternalNetworkId) -> Message;
async fn ack(&self, msg: Message);
}
#[async_trait::async_trait]
impl Processors for Arc<MessageQueue> {
async fn send(&self, network: NetworkId, msg: impl Send + Into<CoordinatorMessage>) {
async fn send(&self, network: ExternalNetworkId, msg: impl Send + Into<CoordinatorMessage>) {
let msg: CoordinatorMessage = msg.into();
let metadata =
Metadata { from: self.service, to: Service::Processor(network), intent: msg.intent() };
let msg = borsh::to_vec(&msg).unwrap();
self.queue(metadata, msg).await;
}
async fn recv(&self, network: NetworkId) -> Message {
async fn recv(&self, network: ExternalNetworkId) -> Message {
let msg = self.next(Service::Processor(network)).await;
assert_eq!(msg.from, Service::Processor(network));

View File

@@ -14,14 +14,15 @@
use zeroize::Zeroizing;
use ciphersuite::{Ciphersuite, Ristretto};
use dalek_ff_group::Ristretto;
use ciphersuite::Ciphersuite;
use borsh::{BorshSerialize, BorshDeserialize};
use serai_client::{
SeraiError, Serai,
primitives::NetworkId,
validator_sets::primitives::{Session, ValidatorSet},
primitives::ExternalNetworkId,
validator_sets::primitives::{ExternalValidatorSet, Session},
Serai, SeraiError,
};
use serai_db::*;
@@ -70,13 +71,18 @@ impl LatestCosignedBlock {
db_channel! {
SubstrateDbChannels {
CosignTransactions: (network: NetworkId) -> (Session, u64, [u8; 32]),
CosignTransactions: (network: ExternalNetworkId) -> (Session, u64, [u8; 32]),
}
}
impl CosignTransactions {
// Append a cosign transaction.
pub fn append_cosign(txn: &mut impl DbTxn, set: ValidatorSet, number: u64, hash: [u8; 32]) {
pub fn append_cosign(
txn: &mut impl DbTxn,
set: ExternalValidatorSet,
number: u64,
hash: [u8; 32],
) {
CosignTransactions::send(txn, set.network, &(set.session, number, hash))
}
}
@@ -256,22 +262,22 @@ async fn advance_cosign_protocol_inner(
// Using the keys of the prior block ensures this deadlock isn't reached
let serai = serai.as_of(actual_block.header.parent_hash.into());
for network in serai_client::primitives::NETWORKS {
for network in serai_client::primitives::EXTERNAL_NETWORKS {
// Get the latest session to have set keys
let set_with_keys = {
let Some(latest_session) = serai.validator_sets().session(network).await? else {
let Some(latest_session) = serai.validator_sets().session(network.into()).await? else {
continue;
};
let prior_session = Session(latest_session.0.saturating_sub(1));
if serai
.validator_sets()
.keys(ValidatorSet { network, session: prior_session })
.keys(ExternalValidatorSet { network, session: prior_session })
.await?
.is_some()
{
ValidatorSet { network, session: prior_session }
ExternalValidatorSet { network, session: prior_session }
} else {
let set = ValidatorSet { network, session: latest_session };
let set = ExternalValidatorSet { network, session: latest_session };
if serai.validator_sets().keys(set).await?.is_none() {
continue;
}
@@ -280,7 +286,7 @@ async fn advance_cosign_protocol_inner(
};
log::debug!("{:?} will be cosigning {block}", set_with_keys.network);
cosigning.push((set_with_keys, in_set(key, &serai, set_with_keys).await?.unwrap()));
cosigning.push((set_with_keys, in_set(key, &serai, set_with_keys.into()).await?.unwrap()));
}
break;

View File

@@ -1,4 +1,4 @@
use serai_client::primitives::NetworkId;
use serai_client::primitives::ExternalNetworkId;
pub use serai_db::*;
@@ -9,7 +9,7 @@ mod inner_db {
SubstrateDb {
NextBlock: () -> u64,
HandledEvent: (block: [u8; 32]) -> u32,
BatchInstructionsHashDb: (network: NetworkId, id: u32) -> [u8; 32]
BatchInstructionsHashDb: (network: ExternalNetworkId, id: u32) -> [u8; 32]
}
);
}

View File

@@ -6,14 +6,18 @@ use std::{
use zeroize::Zeroizing;
use ciphersuite::{group::GroupEncoding, Ciphersuite, Ristretto};
use dalek_ff_group::Ristretto;
use ciphersuite::{group::GroupEncoding, Ciphersuite};
use serai_client::{
SeraiError, Block, Serai, TemporalSerai,
primitives::{BlockHash, EmbeddedEllipticCurve, NetworkId},
validator_sets::{primitives::ValidatorSet, ValidatorSetsEvent},
in_instructions::InInstructionsEvent,
coins::CoinsEvent,
in_instructions::InInstructionsEvent,
primitives::{BlockHash, ExternalNetworkId},
validator_sets::{
primitives::{ExternalValidatorSet, ValidatorSet},
ValidatorSetsEvent,
},
Block, Serai, SeraiError, TemporalSerai,
};
use serai_db::DbTxn;
@@ -52,54 +56,21 @@ async fn handle_new_set<D: Db>(
new_tributary_spec: &mpsc::UnboundedSender<TributarySpec>,
serai: &Serai,
block: &Block,
set: ValidatorSet,
set: ExternalValidatorSet,
) -> Result<(), SeraiError> {
if in_set(key, &serai.as_of(block.hash()), set)
if in_set(key, &serai.as_of(block.hash()), set.into())
.await?
.expect("NewSet for set which doesn't exist")
{
log::info!("present in set {:?}", set);
let validators;
let mut evrf_public_keys = vec![];
{
let set_data = {
let serai = serai.as_of(block.hash());
let serai = serai.validator_sets();
let set_participants =
serai.participants(set.network).await?.expect("NewSet for set which doesn't exist");
serai.participants(set.network.into()).await?.expect("NewSet for set which doesn't exist");
validators = set_participants
.iter()
.map(|(k, w)| {
(
<Ristretto as Ciphersuite>::read_G::<&[u8]>(&mut k.0.as_ref())
.expect("invalid key registered as participant"),
u16::try_from(*w).unwrap(),
)
})
.collect::<Vec<_>>();
for (validator, _) in set_participants {
// This is only run for external networks which always do a DKG for Serai
let substrate = serai
.embedded_elliptic_curve_key(validator, EmbeddedEllipticCurve::Embedwards25519)
.await?
.expect("Serai called NewSet on a validator without an Embedwards25519 key");
// `embedded_elliptic_curves` is documented to have the second entry be the
// network-specific curve (if it exists and is distinct from Embedwards25519)
let network =
if let Some(embedded_elliptic_curve) = set.network.embedded_elliptic_curves().get(1) {
serai.embedded_elliptic_curve_key(validator, *embedded_elliptic_curve).await?.expect(
"Serai called NewSet on a validator without the embedded key required for the network",
)
} else {
substrate.clone()
};
evrf_public_keys.push((
<[u8; 32]>::try_from(substrate)
.expect("validator-sets pallet accepted a key of an invalid length"),
network,
));
}
set_participants.into_iter().map(|(k, w)| (k, u16::try_from(w).unwrap())).collect::<Vec<_>>()
};
let time = if let Ok(time) = block.time() {
@@ -123,7 +94,7 @@ async fn handle_new_set<D: Db>(
const SUBSTRATE_TO_TRIBUTARY_TIME_DELAY: u64 = 120;
let time = time + SUBSTRATE_TO_TRIBUTARY_TIME_DELAY;
let spec = TributarySpec::new(block.hash(), time, set, validators, evrf_public_keys);
let spec = TributarySpec::new(block.hash(), time, set, set_data);
log::info!("creating new tributary for {:?}", spec.set());
@@ -164,7 +135,7 @@ async fn handle_batch_and_burns<Pro: Processors>(
};
let mut batch_block = HashMap::new();
let mut batches = HashMap::<NetworkId, Vec<u32>>::new();
let mut batches = HashMap::<ExternalNetworkId, Vec<u32>>::new();
let mut burns = HashMap::new();
let serai = serai.as_of(block.hash());
@@ -238,8 +209,8 @@ async fn handle_block<D: Db, Pro: Processors>(
db: &mut D,
key: &Zeroizing<<Ristretto as Ciphersuite>::F>,
new_tributary_spec: &mpsc::UnboundedSender<TributarySpec>,
perform_slash_report: &mpsc::UnboundedSender<ValidatorSet>,
tributary_retired: &mpsc::UnboundedSender<ValidatorSet>,
perform_slash_report: &mpsc::UnboundedSender<ExternalValidatorSet>,
tributary_retired: &mpsc::UnboundedSender<ExternalValidatorSet>,
processors: &Pro,
serai: &Serai,
block: Block,
@@ -259,12 +230,8 @@ async fn handle_block<D: Db, Pro: Processors>(
panic!("NewSet event wasn't NewSet: {new_set:?}");
};
// If this is Serai, do nothing
// We only coordinate/process external networks
if set.network == NetworkId::Serai {
continue;
}
let Ok(set) = ExternalValidatorSet::try_from(set) else { continue };
if HandledEvent::is_unhandled(db, hash, event_id) {
log::info!("found fresh new set event {:?}", new_set);
let mut txn = db.txn();
@@ -319,10 +286,7 @@ async fn handle_block<D: Db, Pro: Processors>(
panic!("AcceptedHandover event wasn't AcceptedHandover: {accepted_handover:?}");
};
if set.network == NetworkId::Serai {
continue;
}
let Ok(set) = ExternalValidatorSet::try_from(set) else { continue };
if HandledEvent::is_unhandled(db, hash, event_id) {
log::info!("found fresh accepted handover event {:?}", accepted_handover);
// TODO: This isn't atomic with the event handling
@@ -340,10 +304,7 @@ async fn handle_block<D: Db, Pro: Processors>(
panic!("SetRetired event wasn't SetRetired: {retired_set:?}");
};
if set.network == NetworkId::Serai {
continue;
}
let Ok(set) = ExternalValidatorSet::try_from(set) else { continue };
if HandledEvent::is_unhandled(db, hash, event_id) {
log::info!("found fresh set retired event {:?}", retired_set);
let mut txn = db.txn();
@@ -373,8 +334,8 @@ async fn handle_new_blocks<D: Db, Pro: Processors>(
db: &mut D,
key: &Zeroizing<<Ristretto as Ciphersuite>::F>,
new_tributary_spec: &mpsc::UnboundedSender<TributarySpec>,
perform_slash_report: &mpsc::UnboundedSender<ValidatorSet>,
tributary_retired: &mpsc::UnboundedSender<ValidatorSet>,
perform_slash_report: &mpsc::UnboundedSender<ExternalValidatorSet>,
tributary_retired: &mpsc::UnboundedSender<ExternalValidatorSet>,
processors: &Pro,
serai: &Serai,
next_block: &mut u64,
@@ -428,8 +389,8 @@ pub async fn scan_task<D: Db, Pro: Processors>(
processors: Pro,
serai: Arc<Serai>,
new_tributary_spec: mpsc::UnboundedSender<TributarySpec>,
perform_slash_report: mpsc::UnboundedSender<ValidatorSet>,
tributary_retired: mpsc::UnboundedSender<ValidatorSet>,
perform_slash_report: mpsc::UnboundedSender<ExternalValidatorSet>,
tributary_retired: mpsc::UnboundedSender<ExternalValidatorSet>,
) {
log::info!("scanning substrate");
let mut next_substrate_block = NextBlock::get(&db).unwrap_or_default();
@@ -527,9 +488,12 @@ pub async fn scan_task<D: Db, Pro: Processors>(
/// retry.
pub(crate) async fn expected_next_batch(
serai: &Serai,
network: NetworkId,
network: ExternalNetworkId,
) -> Result<u32, SeraiError> {
async fn expected_next_batch_inner(serai: &Serai, network: NetworkId) -> Result<u32, SeraiError> {
async fn expected_next_batch_inner(
serai: &Serai,
network: ExternalNetworkId,
) -> Result<u32, SeraiError> {
let serai = serai.as_of_latest_finalized_block().await?;
let last = serai.in_instructions().last_batch_for_network(network).await?;
Ok(if let Some(last) = last { last + 1 } else { 0 })
@@ -552,7 +516,7 @@ pub(crate) async fn expected_next_batch(
/// This is deemed fine.
pub(crate) async fn verify_published_batches<D: Db>(
txn: &mut D::Transaction<'_>,
network: NetworkId,
network: ExternalNetworkId,
optimistic_up_to: u32,
) -> Option<u32> {
// TODO: Localize from MainDb to SubstrateDb

View File

@@ -4,7 +4,7 @@ use std::{
collections::{VecDeque, HashSet, HashMap},
};
use serai_client::{primitives::NetworkId, validator_sets::primitives::ValidatorSet};
use serai_client::{primitives::ExternalNetworkId, validator_sets::primitives::ExternalValidatorSet};
use processor_messages::CoordinatorMessage;
@@ -20,7 +20,7 @@ use crate::{
pub mod tributary;
#[derive(Clone)]
pub struct MemProcessors(pub Arc<RwLock<HashMap<NetworkId, VecDeque<CoordinatorMessage>>>>);
pub struct MemProcessors(pub Arc<RwLock<HashMap<ExternalNetworkId, VecDeque<CoordinatorMessage>>>>);
impl MemProcessors {
#[allow(clippy::new_without_default)]
pub fn new() -> MemProcessors {
@@ -30,12 +30,12 @@ impl MemProcessors {
#[async_trait::async_trait]
impl Processors for MemProcessors {
async fn send(&self, network: NetworkId, msg: impl Send + Into<CoordinatorMessage>) {
async fn send(&self, network: ExternalNetworkId, msg: impl Send + Into<CoordinatorMessage>) {
let mut processors = self.0.write().await;
let processor = processors.entry(network).or_insert_with(VecDeque::new);
processor.push_back(msg.into());
}
async fn recv(&self, _: NetworkId) -> Message {
async fn recv(&self, _: ExternalNetworkId) -> Message {
todo!()
}
async fn ack(&self, _: Message) {
@@ -65,8 +65,8 @@ impl LocalP2p {
impl P2p for LocalP2p {
type Id = usize;
async fn subscribe(&self, _set: ValidatorSet, _genesis: [u8; 32]) {}
async fn unsubscribe(&self, _set: ValidatorSet, _genesis: [u8; 32]) {}
async fn subscribe(&self, _set: ExternalValidatorSet, _genesis: [u8; 32]) {}
async fn unsubscribe(&self, _set: ExternalValidatorSet, _genesis: [u8; 32]) {}
async fn send_raw(&self, to: Self::Id, msg: Vec<u8>) {
let mut msg_ref = msg.as_slice();

View File

@@ -7,12 +7,17 @@ use zeroize::Zeroizing;
use rand_core::{RngCore, CryptoRng, OsRng};
use futures_util::{task::Poll, poll};
use ciphersuite::{group::ff::Field, Ciphersuite, Ristretto};
use dalek_ff_group::Ristretto;
use ciphersuite::{
group::{ff::Field, GroupEncoding},
Ciphersuite,
};
use sp_application_crypto::sr25519;
use borsh::BorshDeserialize;
use serai_client::{
primitives::NetworkId,
validator_sets::primitives::{Session, ValidatorSet},
primitives::ExternalNetworkId,
validator_sets::primitives::{ExternalValidatorSet, Session},
};
use tokio::time::sleep;
@@ -46,24 +51,16 @@ pub fn new_spec<R: RngCore + CryptoRng>(
let start_time = SystemTime::now().duration_since(SystemTime::UNIX_EPOCH).unwrap().as_secs();
let set = ValidatorSet { session: Session(0), network: NetworkId::Bitcoin };
let set = ExternalValidatorSet { session: Session(0), network: ExternalNetworkId::Bitcoin };
let validators = keys
let set_participants = keys
.iter()
.map(|key| ((<Ristretto as Ciphersuite>::generator() * **key), 1))
.map(|key| {
(sr25519::Public::from((<Ristretto as Ciphersuite>::generator() * **key).to_bytes()), 1)
})
.collect::<Vec<_>>();
// Generate random eVRF keys as none of these tests rely on them to have any structure
let mut evrf_keys = vec![];
for _ in 0 .. keys.len() {
let mut substrate = [0; 32];
OsRng.fill_bytes(&mut substrate);
let mut network = vec![0; 64];
OsRng.fill_bytes(&mut network);
evrf_keys.push((substrate, network));
}
let res = TributarySpec::new(serai_block, start_time, set, validators, evrf_keys);
let res = TributarySpec::new(serai_block, start_time, set, set_participants);
assert_eq!(
TributarySpec::deserialize_reader(&mut borsh::to_vec(&res).unwrap().as_slice()).unwrap(),
res,

View File

@@ -1,22 +1,27 @@
use core::time::Duration;
use std::collections::HashMap;
use zeroize::Zeroizing;
use rand_core::{RngCore, OsRng};
use ciphersuite::{group::GroupEncoding, Ciphersuite, Ristretto};
use dalek_ff_group::Ristretto;
use ciphersuite::{group::GroupEncoding, Ciphersuite};
use frost::Participant;
use sp_runtime::traits::Verify;
use serai_client::{
primitives::Signature,
validator_sets::primitives::{ValidatorSet, KeyPair},
primitives::{SeraiAddress, Signature},
validator_sets::primitives::{ExternalValidatorSet, KeyPair},
};
use tokio::time::sleep;
use serai_db::{Get, DbTxn, Db, MemDb};
use processor_messages::{key_gen, CoordinatorMessage};
use processor_messages::{
key_gen::{self, KeyGenId},
CoordinatorMessage,
};
use tributary::{TransactionTrait, Tributary};
@@ -50,41 +55,44 @@ async fn dkg_test() {
tokio::spawn(run_tributaries(tributaries.clone()));
let mut txs = vec![];
// Create DKG participation for each key
// Create DKG commitments for each key
for key in &keys {
let mut participation = vec![0; 4096];
OsRng.fill_bytes(&mut participation);
let attempt = 0;
let mut commitments = vec![0; 256];
OsRng.fill_bytes(&mut commitments);
let mut tx =
Transaction::DkgParticipation { participation, signed: Transaction::empty_signed() };
let mut tx = Transaction::DkgCommitments {
attempt,
commitments: vec![commitments],
signed: Transaction::empty_signed(),
};
tx.sign(&mut OsRng, spec.genesis(), key);
txs.push(tx);
}
let block_before_tx = tributaries[0].1.tip().await;
// Publish t-1 participations
let t = ((keys.len() * 2) / 3) + 1;
for (i, tx) in txs.iter().take(t - 1).enumerate() {
// Publish all commitments but one
for (i, tx) in txs.iter().enumerate().skip(1) {
assert_eq!(tributaries[i].1.add_transaction(tx.clone()).await, Ok(true));
}
// Wait until these are included
for tx in txs.iter().skip(1) {
wait_for_tx_inclusion(&tributaries[0].1, block_before_tx, tx.hash()).await;
}
let expected_participations = txs
let expected_commitments: HashMap<_, _> = txs
.iter()
.enumerate()
.map(|(i, tx)| {
if let Transaction::DkgParticipation { participation, .. } = tx {
CoordinatorMessage::KeyGen(key_gen::CoordinatorMessage::Participation {
session: spec.set().session,
participant: Participant::new((i + 1).try_into().unwrap()).unwrap(),
participation: participation.clone(),
})
if let Transaction::DkgCommitments { commitments, .. } = tx {
(Participant::new((i + 1).try_into().unwrap()).unwrap(), commitments[0].clone())
} else {
panic!("txs wasn't a DkgParticipation");
panic!("txs had non-commitments");
}
})
.collect::<Vec<_>>();
.collect();
async fn new_processors(
db: &mut MemDb,
@@ -113,30 +121,28 @@ async fn dkg_test() {
processors
}
// Instantiate a scanner and verify it has the first two participations to report (and isn't
// waiting for `t`)
// Instantiate a scanner and verify it has nothing to report
let processors = new_processors(&mut dbs[0], &keys[0], &spec, &tributaries[0].1).await;
assert_eq!(processors.0.read().await.get(&spec.set().network).unwrap().len(), t - 1);
assert!(processors.0.read().await.is_empty());
// Publish the rest of the participations
// Publish the last commitment
let block_before_tx = tributaries[0].1.tip().await;
for tx in txs.iter().skip(t - 1) {
assert_eq!(tributaries[0].1.add_transaction(tx.clone()).await, Ok(true));
wait_for_tx_inclusion(&tributaries[0].1, block_before_tx, tx.hash()).await;
}
assert_eq!(tributaries[0].1.add_transaction(txs[0].clone()).await, Ok(true));
wait_for_tx_inclusion(&tributaries[0].1, block_before_tx, txs[0].hash()).await;
sleep(Duration::from_secs(Tributary::<MemDb, Transaction, LocalP2p>::block_time().into())).await;
// Verify the scanner emits all KeyGen::Participations messages
// Verify the scanner emits a KeyGen::Commitments message
handle_new_blocks::<_, _, _, _, _, LocalP2p>(
&mut dbs[0],
&keys[0],
&|_, _, _, _| async {
panic!("provided TX caused recognized_id to be called after DkgParticipation")
panic!("provided TX caused recognized_id to be called after Commitments")
},
&processors,
&(),
&|_| async {
panic!(
"test tried to publish a new Tributary TX from handle_application_tx after DkgParticipation"
"test tried to publish a new Tributary TX from handle_application_tx after Commitments"
)
},
&spec,
@@ -145,11 +151,17 @@ async fn dkg_test() {
.await;
{
let mut msgs = processors.0.write().await;
assert_eq!(msgs.len(), 1);
let msgs = msgs.get_mut(&spec.set().network).unwrap();
assert_eq!(msgs.len(), keys.len());
for expected in &expected_participations {
assert_eq!(&msgs.pop_front().unwrap(), expected);
}
let mut expected_commitments = expected_commitments.clone();
expected_commitments.remove(&Participant::new((1).try_into().unwrap()).unwrap());
assert_eq!(
msgs.pop_front().unwrap(),
CoordinatorMessage::KeyGen(key_gen::CoordinatorMessage::Commitments {
id: KeyGenId { session: spec.set().session, attempt: 0 },
commitments: expected_commitments
})
);
assert!(msgs.is_empty());
}
@@ -157,31 +169,38 @@ async fn dkg_test() {
for (i, key) in keys.iter().enumerate().skip(1) {
let processors = new_processors(&mut dbs[i], key, &spec, &tributaries[i].1).await;
let mut msgs = processors.0.write().await;
assert_eq!(msgs.len(), 1);
let msgs = msgs.get_mut(&spec.set().network).unwrap();
assert_eq!(msgs.len(), keys.len());
for expected in &expected_participations {
assert_eq!(&msgs.pop_front().unwrap(), expected);
}
let mut expected_commitments = expected_commitments.clone();
expected_commitments.remove(&Participant::new((i + 1).try_into().unwrap()).unwrap());
assert_eq!(
msgs.pop_front().unwrap(),
CoordinatorMessage::KeyGen(key_gen::CoordinatorMessage::Commitments {
id: KeyGenId { session: spec.set().session, attempt: 0 },
commitments: expected_commitments
})
);
assert!(msgs.is_empty());
}
let mut substrate_key = [0; 32];
OsRng.fill_bytes(&mut substrate_key);
let mut network_key = vec![0; usize::try_from((OsRng.next_u64() % 32) + 32).unwrap()];
OsRng.fill_bytes(&mut network_key);
let key_pair = KeyPair(serai_client::Public(substrate_key), network_key.try_into().unwrap());
// Now do shares
let mut txs = vec![];
for (i, key) in keys.iter().enumerate() {
let mut txn = dbs[i].txn();
// Claim we've generated the key pair
crate::tributary::generated_key_pair::<MemDb>(&mut txn, spec.genesis(), &key_pair);
// Publish the nonces
for (k, key) in keys.iter().enumerate() {
let attempt = 0;
let mut tx = Transaction::DkgConfirmationNonces {
let mut shares = vec![vec![]];
for i in 0 .. keys.len() {
if i != k {
let mut share = vec![0; 256];
OsRng.fill_bytes(&mut share);
shares.last_mut().unwrap().push(share);
}
}
let mut txn = dbs[k].txn();
let mut tx = Transaction::DkgShares {
attempt,
shares,
confirmation_nonces: crate::tributary::dkg_confirmation_nonces(key, &spec, &mut txn, 0),
signed: Transaction::empty_signed(),
};
@@ -189,6 +208,133 @@ async fn dkg_test() {
tx.sign(&mut OsRng, spec.genesis(), key);
txs.push(tx);
}
let block_before_tx = tributaries[0].1.tip().await;
for (i, tx) in txs.iter().enumerate().skip(1) {
assert_eq!(tributaries[i].1.add_transaction(tx.clone()).await, Ok(true));
}
for tx in txs.iter().skip(1) {
wait_for_tx_inclusion(&tributaries[0].1, block_before_tx, tx.hash()).await;
}
// With just 4 sets of shares, nothing should happen yet
handle_new_blocks::<_, _, _, _, _, LocalP2p>(
&mut dbs[0],
&keys[0],
&|_, _, _, _| async {
panic!("provided TX caused recognized_id to be called after some shares")
},
&processors,
&(),
&|_| async {
panic!(
"test tried to publish a new Tributary TX from handle_application_tx after some shares"
)
},
&spec,
&tributaries[0].1.reader(),
)
.await;
assert_eq!(processors.0.read().await.len(), 1);
assert!(processors.0.read().await[&spec.set().network].is_empty());
// Publish the final set of shares
let block_before_tx = tributaries[0].1.tip().await;
assert_eq!(tributaries[0].1.add_transaction(txs[0].clone()).await, Ok(true));
wait_for_tx_inclusion(&tributaries[0].1, block_before_tx, txs[0].hash()).await;
sleep(Duration::from_secs(Tributary::<MemDb, Transaction, LocalP2p>::block_time().into())).await;
// Each scanner should emit a distinct shares message
let shares_for = |i: usize| {
CoordinatorMessage::KeyGen(key_gen::CoordinatorMessage::Shares {
id: KeyGenId { session: spec.set().session, attempt: 0 },
shares: vec![txs
.iter()
.enumerate()
.filter_map(|(l, tx)| {
if let Transaction::DkgShares { shares, .. } = tx {
if i == l {
None
} else {
let relative_i = i - (if i > l { 1 } else { 0 });
Some((
Participant::new((l + 1).try_into().unwrap()).unwrap(),
shares[0][relative_i].clone(),
))
}
} else {
panic!("txs had non-shares");
}
})
.collect::<HashMap<_, _>>()],
})
};
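// (`relative_i` above compensates for each sender omitting their own share:
// a recipient whose index exceeds the sender's reads one slot earlier in that
// sender's `shares[0]`.)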
// Any scanner which has handled the prior blocks should only emit the new event
for (i, key) in keys.iter().enumerate() {
handle_new_blocks::<_, _, _, _, _, LocalP2p>(
&mut dbs[i],
key,
&|_, _, _, _| async { panic!("provided TX caused recognized_id to be called after shares") },
&processors,
&(),
&|_| async { panic!("test tried to publish a new Tributary TX from handle_application_tx") },
&spec,
&tributaries[i].1.reader(),
)
.await;
{
let mut msgs = processors.0.write().await;
assert_eq!(msgs.len(), 1);
let msgs = msgs.get_mut(&spec.set().network).unwrap();
assert_eq!(msgs.pop_front().unwrap(), shares_for(i));
assert!(msgs.is_empty());
}
}
// Yet new scanners should emit all events
for (i, key) in keys.iter().enumerate() {
let processors = new_processors(&mut MemDb::new(), key, &spec, &tributaries[i].1).await;
let mut msgs = processors.0.write().await;
assert_eq!(msgs.len(), 1);
let msgs = msgs.get_mut(&spec.set().network).unwrap();
let mut expected_commitments = expected_commitments.clone();
expected_commitments.remove(&Participant::new((i + 1).try_into().unwrap()).unwrap());
assert_eq!(
msgs.pop_front().unwrap(),
CoordinatorMessage::KeyGen(key_gen::CoordinatorMessage::Commitments {
id: KeyGenId { session: spec.set().session, attempt: 0 },
commitments: expected_commitments
})
);
assert_eq!(msgs.pop_front().unwrap(), shares_for(i));
assert!(msgs.is_empty());
}
// Send DkgConfirmed
let mut substrate_key = [0; 32];
OsRng.fill_bytes(&mut substrate_key);
let mut network_key = vec![0; usize::try_from((OsRng.next_u64() % 32) + 32).unwrap()];
OsRng.fill_bytes(&mut network_key);
let key_pair =
KeyPair(serai_client::Public::from(substrate_key), network_key.try_into().unwrap());
let mut txs = vec![];
for (i, key) in keys.iter().enumerate() {
let attempt = 0;
let mut txn = dbs[i].txn();
let share =
crate::tributary::generated_key_pair::<MemDb>(&mut txn, key, &spec, &key_pair, 0).unwrap();
txn.commit();
let mut tx = Transaction::DkgConfirmed {
attempt,
confirmation_share: share,
signed: Transaction::empty_signed(),
};
tx.sign(&mut OsRng, spec.genesis(), key);
txs.push(tx);
}
let block_before_tx = tributaries[0].1.tip().await;
for (i, tx) in txs.iter().enumerate() {
assert_eq!(tributaries[i].1.add_transaction(tx.clone()).await, Ok(true));
@@ -197,35 +343,6 @@ async fn dkg_test() {
wait_for_tx_inclusion(&tributaries[0].1, block_before_tx, tx.hash()).await;
}
// This should not cause any new processor event as the processor doesn't handle DKG confirming
for (i, key) in keys.iter().enumerate() {
handle_new_blocks::<_, _, _, _, _, LocalP2p>(
&mut dbs[i],
key,
&|_, _, _, _| async {
panic!("provided TX caused recognized_id to be called after DkgConfirmationNonces")
},
&processors,
&(),
// The Tributary handler should publish ConfirmationShare itself after ConfirmationNonces
&|tx| async { assert_eq!(tributaries[i].1.add_transaction(tx).await, Ok(true)) },
&spec,
&tributaries[i].1.reader(),
)
.await;
{
assert!(processors.0.read().await.get(&spec.set().network).unwrap().is_empty());
}
}
// Yet once these TXs are on-chain, the tributary should itself publish the confirmation shares
// This means in the block after the next block, the keys should be set onto Serai
// Sleep twice as long as two blocks, in case there's some stability issue
sleep(Duration::from_secs(
2 * 2 * u64::from(Tributary::<MemDb, Transaction, LocalP2p>::block_time()),
))
.await;
struct CheckPublishSetKeys {
spec: TributarySpec,
key_pair: KeyPair,
@@ -235,25 +352,20 @@ async fn dkg_test() {
async fn publish_set_keys(
&self,
_db: &(impl Sync + Get),
set: ValidatorSet,
set: ExternalValidatorSet,
removed: Vec<SeraiAddress>,
key_pair: KeyPair,
signature_participants: bitvec::vec::BitVec<u8, bitvec::order::Lsb0>,
signature: Signature,
) {
assert_eq!(set, self.spec.set());
assert!(removed.is_empty());
assert_eq!(self.key_pair, key_pair);
assert!(signature.verify(
&*serai_client::validator_sets::primitives::set_keys_message(&set, &key_pair),
&serai_client::Public(
frost::dkg::musig::musig_key::<Ristretto>(
&serai_client::validator_sets::primitives::musig_context(set),
&self
.spec
.validators()
.into_iter()
.zip(signature_participants)
.filter_map(|((validator, _), included)| included.then_some(validator))
.collect::<Vec<_>>()
&*serai_client::validator_sets::primitives::set_keys_message(&set, &[], &key_pair),
&serai_client::Public::from(
dkg_musig::musig_key_vartime::<Ristretto>(
serai_client::validator_sets::primitives::musig_context(set.into()),
&self.spec.validators().into_iter().map(|(validator, _)| validator).collect::<Vec<_>>()
)
.unwrap()
.to_bytes()

View File

@@ -2,12 +2,13 @@ use core::fmt::Debug;
use rand_core::{RngCore, OsRng};
use ciphersuite::{group::Group, Ciphersuite, Ristretto};
use dalek_ff_group::Ristretto;
use ciphersuite::{group::Group, Ciphersuite};
use scale::{Encode, Decode};
use serai_client::{
primitives::Signature,
validator_sets::primitives::{MAX_KEY_SHARES_PER_SET, ValidatorSet, KeyPair},
primitives::{SeraiAddress, Signature},
validator_sets::primitives::{ExternalValidatorSet, KeyPair, MAX_KEY_SHARES_PER_SET},
};
use processor_messages::coordinator::SubstrateSignableId;
@@ -31,9 +32,9 @@ impl PublishSeraiTransaction for () {
async fn publish_set_keys(
&self,
_db: &(impl Sync + serai_db::Get),
_set: ValidatorSet,
_set: ExternalValidatorSet,
_removed: Vec<SeraiAddress>,
_key_pair: KeyPair,
_signature_participants: bitvec::vec::BitVec<u8, bitvec::order::Lsb0>,
_signature: Signature,
) {
panic!("publish_set_keys was called in test")
@@ -143,34 +144,84 @@ fn serialize_sign_data() {
#[test]
fn serialize_transaction() {
test_read_write(&Transaction::RemoveParticipant {
test_read_write(&Transaction::RemoveParticipantDueToDkg {
participant: <Ristretto as Ciphersuite>::G::random(&mut OsRng),
signed: random_signed_with_nonce(&mut OsRng, 0),
});
test_read_write(&Transaction::DkgParticipation {
participation: random_vec(&mut OsRng, 4096),
signed: random_signed_with_nonce(&mut OsRng, 0),
});
{
let mut commitments = vec![random_vec(&mut OsRng, 512)];
for _ in 0 .. (OsRng.next_u64() % 100) {
let mut temp = commitments[0].clone();
OsRng.fill_bytes(&mut temp);
commitments.push(temp);
}
test_read_write(&Transaction::DkgCommitments {
attempt: random_u32(&mut OsRng),
commitments,
signed: random_signed_with_nonce(&mut OsRng, 0),
});
}
test_read_write(&Transaction::DkgConfirmationNonces {
attempt: random_u32(&mut OsRng),
confirmation_nonces: {
let mut nonces = [0; 64];
OsRng.fill_bytes(&mut nonces);
nonces
},
signed: random_signed_with_nonce(&mut OsRng, 0),
});
{
// This supports a variable share length and a variable amount of sent shares, yet the share
// length and amount of sent shares are expected to be constant among recipients
let share_len = usize::try_from((OsRng.next_u64() % 512) + 1).unwrap();
let amount_of_shares = usize::try_from((OsRng.next_u64() % 3) + 1).unwrap();
// Create a valid vec of shares
let mut shares = vec![];
// Create up to 150 participants
for _ in 0 ..= (OsRng.next_u64() % 150) {
// Give each sender multiple shares
let mut sender_shares = vec![];
for _ in 0 .. amount_of_shares {
let mut share = vec![0; share_len];
OsRng.fill_bytes(&mut share);
sender_shares.push(share);
}
shares.push(sender_shares);
}
test_read_write(&Transaction::DkgConfirmationShare {
test_read_write(&Transaction::DkgShares {
attempt: random_u32(&mut OsRng),
shares,
confirmation_nonces: {
let mut nonces = [0; 64];
OsRng.fill_bytes(&mut nonces);
nonces
},
signed: random_signed_with_nonce(&mut OsRng, 1),
});
}
for i in 0 .. 2 {
test_read_write(&Transaction::InvalidDkgShare {
attempt: random_u32(&mut OsRng),
accuser: frost::Participant::new(
u16::try_from(OsRng.next_u64() >> 48).unwrap().saturating_add(1),
)
.unwrap(),
faulty: frost::Participant::new(
u16::try_from(OsRng.next_u64() >> 48).unwrap().saturating_add(1),
)
.unwrap(),
blame: if i == 0 {
None
} else {
Some(random_vec(&mut OsRng, 500)).filter(|blame| !blame.is_empty())
},
signed: random_signed_with_nonce(&mut OsRng, 2),
});
}
test_read_write(&Transaction::DkgConfirmed {
attempt: random_u32(&mut OsRng),
confirmation_share: {
let mut share = [0; 32];
OsRng.fill_bytes(&mut share);
share
},
signed: random_signed_with_nonce(&mut OsRng, 1),
signed: random_signed_with_nonce(&mut OsRng, 2),
});
{

View File

@@ -3,7 +3,8 @@ use std::{sync::Arc, collections::HashSet};
use rand_core::OsRng;
use ciphersuite::{group::GroupEncoding, Ciphersuite, Ristretto};
use dalek_ff_group::Ristretto;
use ciphersuite::{group::GroupEncoding, Ciphersuite};
use tokio::{
sync::{mpsc, broadcast},
@@ -29,7 +30,7 @@ async fn sync_test() {
let mut keys = new_keys(&mut OsRng);
let spec = new_spec(&mut OsRng, &keys);
// Ensure this can have a node fail
assert!(spec.n() > spec.t());
assert!(spec.n(&[]) > spec.t());
let mut tributaries = new_tributaries(&keys, &spec)
.await
@@ -142,7 +143,7 @@ async fn sync_test() {
// Because only `t` validators are used in a commit, take n - t nodes offline, leaving only
// `t` nodes, which should force every remaining node to participate in the consensus of
// subsequent blocks
let spares = usize::from(spec.n() - spec.t());
let spares = usize::from(spec.n(&[]) - spec.t());
for thread in p2p_threads.iter().take(spares) {
thread.abort();
}

View File

@@ -37,14 +37,15 @@ async fn tx_test() {
usize::try_from(OsRng.next_u64() % u64::try_from(tributaries.len()).unwrap()).unwrap();
let key = keys[sender].clone();
let block_before_tx = tributaries[sender].1.tip().await;
let attempt = 0;
let mut commitments = vec![0; 256];
OsRng.fill_bytes(&mut commitments);
// Create the TX with a null signature so we can get its sig hash
let mut tx = Transaction::DkgParticipation {
participation: {
let mut participation = vec![0; 4096];
OsRng.fill_bytes(&mut participation);
participation
},
let block_before_tx = tributaries[sender].1.tip().await;
let mut tx = Transaction::DkgCommitments {
attempt,
commitments: vec![commitments.clone()],
signed: Transaction::empty_signed(),
};
tx.sign(&mut OsRng, spec.genesis(), &key);

View File

@@ -3,10 +3,11 @@ use std::collections::HashMap;
use scale::Encode;
use borsh::{BorshSerialize, BorshDeserialize};
use ciphersuite::{group::GroupEncoding, Ciphersuite, Ristretto};
use dalek_ff_group::Ristretto;
use ciphersuite::{group::GroupEncoding, Ciphersuite};
use frost::Participant;
use serai_client::validator_sets::primitives::{KeyPair, ValidatorSet};
use serai_client::validator_sets::primitives::{KeyPair, ExternalValidatorSet};
use processor_messages::coordinator::SubstrateSignableId;
@@ -18,6 +19,7 @@ use crate::tributary::{Label, Transaction};
#[derive(Clone, Copy, PartialEq, Eq, Debug, Encode, BorshSerialize, BorshDeserialize)]
pub enum Topic {
Dkg,
DkgConfirmation,
SubstrateSign(SubstrateSignableId),
Sign([u8; 32]),
@@ -45,13 +47,15 @@ pub enum Accumulation {
create_db!(
Tributary {
SeraiBlockNumber: (hash: [u8; 32]) -> u64,
SeraiDkgCompleted: (set: ValidatorSet) -> [u8; 32],
SeraiDkgCompleted: (spec: ExternalValidatorSet) -> [u8; 32],
TributaryBlockNumber: (block: [u8; 32]) -> u32,
LastHandledBlock: (genesis: [u8; 32]) -> [u8; 32],
// TODO: Revisit the point of this
FatalSlashes: (genesis: [u8; 32]) -> Vec<[u8; 32]>,
RemovedAsOfDkgAttempt: (genesis: [u8; 32], attempt: u32) -> Vec<[u8; 32]>,
OfflineDuringDkg: (genesis: [u8; 32]) -> Vec<[u8; 32]>,
// TODO: Combine these two
FatallySlashed: (genesis: [u8; 32], account: [u8; 32]) -> (),
SlashPoints: (genesis: [u8; 32], account: [u8; 32]) -> u32,
@@ -64,9 +68,11 @@ create_db!(
DataReceived: (genesis: [u8; 32], data_spec: &DataSpecification) -> u16,
DataDb: (genesis: [u8; 32], data_spec: &DataSpecification, signer_bytes: &[u8; 32]) -> Vec<u8>,
DkgParticipation: (genesis: [u8; 32], from: u16) -> Vec<u8>,
DkgShare: (genesis: [u8; 32], from: u16, to: u16) -> Vec<u8>,
ConfirmationNonces: (genesis: [u8; 32], attempt: u32) -> HashMap<Participant, Vec<u8>>,
DkgKeyPair: (genesis: [u8; 32]) -> KeyPair,
DkgKeyPair: (genesis: [u8; 32], attempt: u32) -> KeyPair,
KeyToDkgAttempt: (key: [u8; 32]) -> u32,
DkgLocallyCompleted: (genesis: [u8; 32]) -> (),
PlanIds: (genesis: &[u8], block: u64) -> Vec<[u8; 32]>,
@@ -75,7 +81,7 @@ create_db!(
SlashReports: (genesis: [u8; 32], signer: [u8; 32]) -> Vec<u32>,
SlashReported: (genesis: [u8; 32]) -> u16,
SlashReportCutOff: (genesis: [u8; 32]) -> u64,
SlashReport: (set: ValidatorSet) -> Vec<([u8; 32], u32)>,
SlashReport: (set: ExternalValidatorSet) -> Vec<([u8; 32], u32)>,
}
);
@@ -118,12 +124,12 @@ impl AttemptDb {
pub fn attempt(getter: &impl Get, genesis: [u8; 32], topic: Topic) -> Option<u32> {
let attempt = Self::get(getter, genesis, &topic);
// Don't require explicit recognition of the DkgConfirmation topic as it starts when the chain
// does
// Don't require explicit recognition of the Dkg topic as it starts when the chain does
// Don't require explicit recognition of the SlashReport topic as it isn't a DoS risk and it
// should always happen (eventually)
if attempt.is_none() &&
((topic == Topic::DkgConfirmation) ||
((topic == Topic::Dkg) ||
(topic == Topic::DkgConfirmation) ||
(topic == Topic::SubstrateSign(SubstrateSignableId::SlashReport)))
{
return Some(0);
@@ -150,12 +156,16 @@ impl ReattemptDb {
// 5 minutes for attempts 0 ..= 2, 10 minutes for attempts 3 ..= 5, 15 minutes for attempts > 5
// Assumes no event will take longer than 15 minutes, yet grows the time in case there are
// network bandwidth issues
let reattempt_delay = BASE_REATTEMPT_DELAY *
let mut reattempt_delay = BASE_REATTEMPT_DELAY *
((AttemptDb::attempt(txn, genesis, topic)
.expect("scheduling re-attempt for unknown topic") /
3) +
1)
.min(3);
// Allow more time for DKGs since they have an extra round and much more data
if matches!(topic, Topic::Dkg) {
reattempt_delay *= 4;
}
let upon_block = current_block_number + reattempt_delay;
let mut reattempts = Self::get(txn, genesis, upon_block).unwrap_or(vec![]);
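
The schedule above reduces to a small pure function. A minimal sketch, assuming `BASE_REATTEMPT_DELAY` is a block count, with the logic the diff inlines lifted into a hypothetical free function:

fn reattempt_delay(base_reattempt_delay: u32, attempt: u32, is_dkg: bool) -> u32 {
  // 5 minutes for attempts 0 ..= 2, 10 minutes for attempts 3 ..= 5, 15 minutes afterwards
  let mut delay = base_reattempt_delay * (((attempt / 3) + 1).min(3));
  // DKGs have an extra round and much more data, so they're allowed four times as long
  if is_dkg {
    delay *= 4;
  }
  delay
}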

View File

@@ -4,16 +4,17 @@ use std::collections::HashMap;
use zeroize::Zeroizing;
use rand_core::OsRng;
use ciphersuite::{group::GroupEncoding, Ciphersuite, Ristretto};
use dalek_ff_group::Ristretto;
use ciphersuite::{group::GroupEncoding, Ciphersuite};
use frost::dkg::Participant;
use scale::{Encode, Decode};
use serai_client::{Signature, validator_sets::primitives::KeyPair};
use serai_client::validator_sets::primitives::KeyPair;
use tributary::{Signed, TransactionKind, TransactionTrait};
use processor_messages::{
key_gen::self,
key_gen::{self, KeyGenId},
coordinator::{self, SubstrateSignableId, SubstrateSignId},
sign::{self, SignId},
};
@@ -38,20 +39,33 @@ pub fn dkg_confirmation_nonces(
txn: &mut impl DbTxn,
attempt: u32,
) -> [u8; 64] {
DkgConfirmer::new(key, spec, txn, attempt).preprocess()
DkgConfirmer::new(key, spec, txn, attempt)
.expect("getting DKG confirmation nonces for unknown attempt")
.preprocess()
}
pub fn generated_key_pair<D: Db>(
txn: &mut D::Transaction<'_>,
genesis: [u8; 32],
key: &Zeroizing<<Ristretto as Ciphersuite>::F>,
spec: &TributarySpec,
key_pair: &KeyPair,
) {
DkgKeyPair::set(txn, genesis, key_pair);
attempt: u32,
) -> Result<[u8; 32], Participant> {
DkgKeyPair::set(txn, spec.genesis(), attempt, key_pair);
KeyToDkgAttempt::set(txn, key_pair.0 .0, &attempt);
let preprocesses = ConfirmationNonces::get(txn, spec.genesis(), attempt).unwrap();
DkgConfirmer::new(key, spec, txn, attempt)
.expect("claiming to have generated a key pair for an unrecognized attempt")
.share(preprocesses, key_pair)
}
fn unflatten(spec: &TributarySpec, data: &mut HashMap<Participant, Vec<u8>>) {
fn unflatten(
spec: &TributarySpec,
removed: &[<Ristretto as Ciphersuite>::G],
data: &mut HashMap<Participant, Vec<u8>>,
) {
for (validator, _) in spec.validators() {
let Some(range) = spec.i(validator) else { continue };
let Some(range) = spec.i(removed, validator) else { continue };
let Some(all_segments) = data.remove(&range.start) else {
continue;
};
@@ -75,6 +89,7 @@ impl<
{
fn accumulate(
&mut self,
removed: &[<Ristretto as Ciphersuite>::G],
data_spec: &DataSpecification,
signer: <Ristretto as Ciphersuite>::G,
data: &Vec<u8>,
@@ -85,7 +100,10 @@ impl<
panic!("accumulating data for a participant multiple times");
}
let signer_shares = {
let signer_i = self.spec.i(signer).expect("transaction signer wasn't a member of the set");
let Some(signer_i) = self.spec.i(removed, signer) else {
log::warn!("accumulating data from {} who was removed", hex::encode(signer.to_bytes()));
return Accumulation::NotReady;
};
u16::from(signer_i.end) - u16::from(signer_i.start)
};
@@ -98,7 +116,11 @@ impl<
// If 2/3rds of the network participated in this preprocess, queue it for an automatic
// re-attempt
if (data_spec.label == Label::Preprocess) && received_range.contains(&self.spec.t()) {
// DkgConfirmation doesn't have a re-attempt as it's just an extension of Dkg
if (data_spec.label == Label::Preprocess) &&
received_range.contains(&self.spec.t()) &&
(data_spec.topic != Topic::DkgConfirmation)
{
// Double check the attempt on this entry, as we don't want to schedule a re-attempt if this
// is an old entry
// This is an assert, not part of the if check, as old data shouldn't be here in the first
@@ -108,7 +130,10 @@ impl<
}
// If we have all the needed commitments/preprocesses/shares, tell the processor
if received_range.contains(&self.spec.t()) {
let needs_everyone =
(data_spec.topic == Topic::Dkg) || (data_spec.topic == Topic::DkgConfirmation);
let needed = if needs_everyone { self.spec.n(removed) } else { self.spec.t() };
if received_range.contains(&needed) {
log::debug!(
"accumulation for entry {:?} attempt #{} is ready",
&data_spec.topic,
@@ -117,7 +142,7 @@ impl<
let mut data = HashMap::new();
for validator in self.spec.validators().iter().map(|validator| validator.0) {
let Some(i) = self.spec.i(validator) else { continue };
let Some(i) = self.spec.i(removed, validator) else { continue };
data.insert(
i.start,
if let Some(data) = DataDb::get(self.txn, genesis, data_spec, &validator.to_bytes()) {
@@ -128,10 +153,10 @@ impl<
);
}
assert_eq!(data.len(), usize::from(self.spec.t()));
assert_eq!(data.len(), usize::from(needed));
// Remove our own piece of data, if we were involved
if let Some(i) = self.spec.i(Ristretto::generator() * self.our_key.deref()) {
if let Some(i) = self.spec.i(removed, Ristretto::generator() * self.our_key.deref()) {
if data.remove(&i.start).is_some() {
return Accumulation::Ready(DataSet::Participating(data));
}
@@ -143,6 +168,7 @@ impl<
fn handle_data(
&mut self,
removed: &[<Ristretto as Ciphersuite>::G],
data_spec: &DataSpecification,
bytes: &Vec<u8>,
signed: &Signed,
@@ -188,15 +214,21 @@ impl<
// TODO: If this is shares, we need to check they are part of the selected signing set
// Accumulate this data
self.accumulate(data_spec, signed.signer, bytes)
self.accumulate(removed, data_spec, signed.signer, bytes)
}
fn check_sign_data_len(
&mut self,
removed: &[<Ristretto as Ciphersuite>::G],
signer: <Ristretto as Ciphersuite>::G,
len: usize,
) -> Result<(), ()> {
let signer_i = self.spec.i(signer).expect("signer wasn't a member of the set");
let Some(signer_i) = self.spec.i(removed, signer) else {
// TODO: Ensure the processor doesn't participate here, and check how it handles removals
// for being offline
self.fatal_slash(signer.to_bytes(), "signer participated despite being removed");
Err(())?
};
if len != usize::from(u16::from(signer_i.end) - u16::from(signer_i.start)) {
self.fatal_slash(
signer.to_bytes(),
@@ -223,9 +255,12 @@ impl<
}
match tx {
Transaction::RemoveParticipant { participant, signed } => {
if self.spec.i(participant).is_none() {
self.fatal_slash(participant.to_bytes(), "RemoveParticipant vote for non-validator");
Transaction::RemoveParticipantDueToDkg { participant, signed } => {
if self.spec.i(&[], participant).is_none() {
self.fatal_slash(
participant.to_bytes(),
"RemoveParticipantDueToDkg vote for non-validator",
);
return;
}
@@ -240,106 +275,268 @@ impl<
let prior_votes = VotesToRemove::get(self.txn, genesis, participant).unwrap_or(0);
let signer_votes =
self.spec.i(signed.signer).expect("signer wasn't a validator for this network?");
self.spec.i(&[], signed.signer).expect("signer wasn't a validator for this network?");
let new_votes = prior_votes + u16::from(signer_votes.end) - u16::from(signer_votes.start);
VotesToRemove::set(self.txn, genesis, participant, &new_votes);
if ((prior_votes + 1) ..= new_votes).contains(&self.spec.t()) {
self.fatal_slash(participant, "RemoveParticipant vote")
self.fatal_slash(participant, "RemoveParticipantDueToDkg vote")
}
}
Transaction::DkgParticipation { participation, signed } => {
// Send the participation to the processor
Transaction::DkgCommitments { attempt, commitments, signed } => {
let Some(removed) = removed_as_of_dkg_attempt(self.txn, genesis, attempt) else {
self.fatal_slash(signed.signer.to_bytes(), "DkgCommitments with an unrecognized attempt");
return;
};
let Ok(()) = self.check_sign_data_len(&removed, signed.signer, commitments.len()) else {
return;
};
let data_spec = DataSpecification { topic: Topic::Dkg, label: Label::Preprocess, attempt };
match self.handle_data(&removed, &data_spec, &commitments.encode(), &signed) {
Accumulation::Ready(DataSet::Participating(mut commitments)) => {
log::info!("got all DkgCommitments for {}", hex::encode(genesis));
unflatten(self.spec, &removed, &mut commitments);
self
.processors
.send(
self.spec.set().network,
key_gen::CoordinatorMessage::Commitments {
id: KeyGenId { session: self.spec.set().session, attempt },
commitments,
},
)
.await;
}
Accumulation::Ready(DataSet::NotParticipating) => {
assert!(
removed.contains(&(Ristretto::generator() * self.our_key.deref())),
"NotParticipating in a DkgCommitments we weren't removed for"
);
}
Accumulation::NotReady => {}
}
}
Transaction::DkgShares { attempt, mut shares, confirmation_nonces, signed } => {
let Some(removed) = removed_as_of_dkg_attempt(self.txn, genesis, attempt) else {
self.fatal_slash(signed.signer.to_bytes(), "DkgShares with an unrecognized attempt");
return;
};
let not_participating = removed.contains(&(Ristretto::generator() * self.our_key.deref()));
let Ok(()) = self.check_sign_data_len(&removed, signed.signer, shares.len()) else {
return;
};
let Some(sender_i) = self.spec.i(&removed, signed.signer) else {
self.fatal_slash(
signed.signer.to_bytes(),
"DkgShares for a DKG they aren't participating in",
);
return;
};
let sender_is_len = u16::from(sender_i.end) - u16::from(sender_i.start);
for shares in &shares {
if shares.len() != (usize::from(self.spec.n(&removed) - sender_is_len)) {
self.fatal_slash(signed.signer.to_bytes(), "invalid amount of DKG shares");
return;
}
}
// Save each share as needed for blame
for (from_offset, shares) in shares.iter().enumerate() {
let from =
Participant::new(u16::from(sender_i.start) + u16::try_from(from_offset).unwrap())
.unwrap();
for (to_offset, share) in shares.iter().enumerate() {
// 0-indexed (the enumeration) to 1-indexed (Participant)
let mut to = u16::try_from(to_offset).unwrap() + 1;
// Adjust for the omission of the sender's own shares
if to >= u16::from(sender_i.start) {
to += u16::from(sender_i.end) - u16::from(sender_i.start);
}
let to = Participant::new(to).unwrap();
DkgShare::set(self.txn, genesis, from.into(), to.into(), share);
}
}
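// Worked example of the above mapping (hypothetical values): a sender with i in 3 .. 5
// (two key shares) enumerates receivers 0, 1, 2, 3, which map to Participants 1, 2, 5, 6,
// skipping the sender's own 3 and 4 (shares to oneself are never sent)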
// Filter down to only our share's bytes for handle
let our_shares = if let Some(our_i) =
self.spec.i(&removed, Ristretto::generator() * self.our_key.deref())
{
if sender_i == our_i {
vec![]
} else {
// 1-indexed to 0-indexed
let mut our_i_pos = u16::from(our_i.start) - 1;
// Handle the omission of the sender's own data
if u16::from(our_i.start) > u16::from(sender_i.start) {
our_i_pos -= sender_is_len;
}
let our_i_pos = usize::from(our_i_pos);
shares
.iter_mut()
.map(|shares| {
shares
.drain(
our_i_pos ..
(our_i_pos + usize::from(u16::from(our_i.end) - u16::from(our_i.start))),
)
.collect::<Vec<_>>()
})
.collect()
}
} else {
assert!(
not_participating,
"we didn't have an i while handling DkgShares we weren't removed for"
);
// Since we're not participating, simply save vec![] for our shares
vec![]
};
// Drop shares as it's presumably been mutated into invalidity
drop(shares);
let data_spec = DataSpecification { topic: Topic::Dkg, label: Label::Share, attempt };
let encoded_data = (confirmation_nonces.to_vec(), our_shares.encode()).encode();
match self.handle_data(&removed, &data_spec, &encoded_data, &signed) {
Accumulation::Ready(DataSet::Participating(confirmation_nonces_and_shares)) => {
log::info!("got all DkgShares for {}", hex::encode(genesis));
let mut confirmation_nonces = HashMap::new();
let mut shares = HashMap::new();
for (participant, confirmation_nonces_and_shares) in confirmation_nonces_and_shares {
let (these_confirmation_nonces, these_shares) =
<(Vec<u8>, Vec<u8>)>::decode(&mut confirmation_nonces_and_shares.as_slice())
.unwrap();
confirmation_nonces.insert(participant, these_confirmation_nonces);
shares.insert(participant, these_shares);
}
ConfirmationNonces::set(self.txn, genesis, attempt, &confirmation_nonces);
// shares is a HashMap<Participant, Vec<Vec<Vec<u8>>>>, with the values representing:
// - Each of the sender's shares
// - Each of our shares
// - Each share
// We need a Vec<HashMap<Participant, Vec<u8>>>, with the outer being each of ours
let mut expanded_shares = vec![];
for (sender_start_i, shares) in shares {
let shares: Vec<Vec<Vec<u8>>> = Vec::<_>::decode(&mut shares.as_slice()).unwrap();
for (sender_i_offset, our_shares) in shares.into_iter().enumerate() {
for (our_share_i, our_share) in our_shares.into_iter().enumerate() {
if expanded_shares.len() <= our_share_i {
expanded_shares.push(HashMap::new());
}
expanded_shares[our_share_i].insert(
Participant::new(
u16::from(sender_start_i) + u16::try_from(sender_i_offset).unwrap(),
)
.unwrap(),
our_share,
);
}
}
}
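// e.g. (hypothetical): two senders at Participants 1 and 3, with us holding two key
// shares, yield an expanded_shares of length 2, each map keyed by Participants 1 and 3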
self
.processors
.send(
self.spec.set().network,
key_gen::CoordinatorMessage::Shares {
id: KeyGenId { session: self.spec.set().session, attempt },
shares: expanded_shares,
},
)
.await;
}
Accumulation::Ready(DataSet::NotParticipating) => {
assert!(not_participating, "NotParticipating in a DkgShares we weren't removed for");
}
Accumulation::NotReady => {}
}
}
Transaction::InvalidDkgShare { attempt, accuser, faulty, blame, signed } => {
let Some(removed) = removed_as_of_dkg_attempt(self.txn, genesis, attempt) else {
self
.fatal_slash(signed.signer.to_bytes(), "InvalidDkgShare with an unrecognized attempt");
return;
};
let Some(range) = self.spec.i(&removed, signed.signer) else {
self.fatal_slash(
signed.signer.to_bytes(),
"InvalidDkgShare for a DKG they aren't participating in",
);
return;
};
if !range.contains(&accuser) {
self.fatal_slash(
signed.signer.to_bytes(),
"accused with a Participant index which wasn't theirs",
);
return;
}
if range.contains(&faulty) {
self.fatal_slash(signed.signer.to_bytes(), "accused self of having an InvalidDkgShare");
return;
}
let Some(share) = DkgShare::get(self.txn, genesis, accuser.into(), faulty.into()) else {
self.fatal_slash(
signed.signer.to_bytes(),
"InvalidDkgShare had a non-existent faulty participant",
);
return;
};
self
.processors
.send(
self.spec.set().network,
key_gen::CoordinatorMessage::Participation {
session: self.spec.set().session,
participant: self
.spec
.i(signed.signer)
.expect("signer wasn't a validator for this network?")
.start,
participation,
key_gen::CoordinatorMessage::VerifyBlame {
id: KeyGenId { session: self.spec.set().session, attempt },
accuser,
accused: faulty,
share,
blame,
},
)
.await;
}
Transaction::DkgConfirmationNonces { attempt, confirmation_nonces, signed } => {
let data_spec =
DataSpecification { topic: Topic::DkgConfirmation, label: Label::Preprocess, attempt };
match self.handle_data(&data_spec, &confirmation_nonces.to_vec(), &signed) {
Accumulation::Ready(DataSet::Participating(confirmation_nonces)) => {
log::info!(
"got all DkgConfirmationNonces for {}, attempt {attempt}",
hex::encode(genesis)
);
Transaction::DkgConfirmed { attempt, confirmation_share, signed } => {
let Some(removed) = removed_as_of_dkg_attempt(self.txn, genesis, attempt) else {
self.fatal_slash(signed.signer.to_bytes(), "DkgConfirmed with an unrecognized attempt");
return;
};
ConfirmationNonces::set(self.txn, genesis, attempt, &confirmation_nonces);
// Send the expected DkgConfirmationShare
// TODO: Slight race condition here due to set, publish tx, then commit txn
let key_pair = DkgKeyPair::get(self.txn, genesis)
.expect("participating in confirming key we don't have");
let mut tx = match DkgConfirmer::new(self.our_key, self.spec, self.txn, attempt)
.share(confirmation_nonces, &key_pair)
{
Ok(confirmation_share) => Transaction::DkgConfirmationShare {
attempt,
confirmation_share,
signed: Transaction::empty_signed(),
},
Err(participant) => Transaction::RemoveParticipant {
participant: self.spec.reverse_lookup_i(participant).unwrap(),
signed: Transaction::empty_signed(),
},
};
tx.sign(&mut OsRng, genesis, self.our_key);
self.publish_tributary_tx.publish_tributary_tx(tx).await;
}
Accumulation::Ready(DataSet::NotParticipating) | Accumulation::NotReady => {}
}
}
Transaction::DkgConfirmationShare { attempt, confirmation_share, signed } => {
let data_spec =
DataSpecification { topic: Topic::DkgConfirmation, label: Label::Share, attempt };
match self.handle_data(&data_spec, &confirmation_share.to_vec(), &signed) {
match self.handle_data(&removed, &data_spec, &confirmation_share.to_vec(), &signed) {
Accumulation::Ready(DataSet::Participating(shares)) => {
log::info!(
"got all DkgConfirmationShare for {}, attempt {attempt}",
hex::encode(genesis)
);
log::info!("got all DkgConfirmed for {}", hex::encode(genesis));
let Some(removed) = removed_as_of_dkg_attempt(self.txn, genesis, attempt) else {
panic!(
"DkgConfirmed for everyone yet didn't have the removed parties for this attempt",
);
};
let preprocesses = ConfirmationNonces::get(self.txn, genesis, attempt).unwrap();
// TODO: This can technically happen under very very very specific timing as the txn
// put happens before DkgConfirmationShare, yet the txn isn't guaranteed to be
// committed
let key_pair = DkgKeyPair::get(self.txn, genesis).expect(
"in DkgConfirmationShare handling, which happens after everyone \
(including us) fires DkgConfirmationShare, yet no confirming key pair",
// put happens before DkgConfirmed, yet the txn commit isn't guaranteed to
let key_pair = DkgKeyPair::get(self.txn, genesis, attempt).expect(
"in DkgConfirmed handling, which happens after everyone \
(including us) fires DkgConfirmed, yet no confirming key pair",
);
// Determine the bitstring representing who participated before we move `shares`
let validators = self.spec.validators();
let mut signature_participants = bitvec::vec::BitVec::with_capacity(validators.len());
for (participant, _) in validators {
signature_participants.push(
(participant == (<Ristretto as Ciphersuite>::generator() * self.our_key.deref())) ||
shares.contains_key(&self.spec.i(participant).unwrap().start),
);
}
// Produce the final signature
let mut confirmer = DkgConfirmer::new(self.our_key, self.spec, self.txn, attempt);
let mut confirmer = DkgConfirmer::new(self.our_key, self.spec, self.txn, attempt)
.expect("confirming DKG for unrecognized attempt");
let sig = match confirmer.complete(preprocesses, &key_pair, shares) {
Ok(sig) => sig,
Err(p) => {
let mut tx = Transaction::RemoveParticipant {
participant: self.spec.reverse_lookup_i(p).unwrap(),
let mut tx = Transaction::RemoveParticipantDueToDkg {
participant: self.spec.reverse_lookup_i(&removed, p).unwrap(),
signed: Transaction::empty_signed(),
};
tx.sign(&mut OsRng, genesis, self.our_key);
@@ -348,18 +545,23 @@ impl<
}
};
DkgLocallyCompleted::set(self.txn, genesis, &());
self
.publish_serai_tx
.publish_set_keys(
self.db,
self.spec.set(),
removed.into_iter().map(|key| key.to_bytes().into()).collect(),
key_pair,
signature_participants,
Signature(sig),
sig.into(),
)
.await;
}
Accumulation::Ready(DataSet::NotParticipating) | Accumulation::NotReady => {}
Accumulation::Ready(DataSet::NotParticipating) => {
panic!("wasn't a participant in DKG confirmination shares")
}
Accumulation::NotReady => {}
}
}
@@ -417,8 +619,19 @@ impl<
}
Transaction::SubstrateSign(data) => {
// Provided transactions ensure synchrony on any signing protocol, and we won't start
// signing with threshold keys before we've confirmed them on-chain
let Some(removed) =
crate::tributary::removed_as_of_set_keys(self.txn, self.spec.set(), genesis)
else {
self.fatal_slash(
data.signed.signer.to_bytes(),
"signing despite not having set keys on substrate",
);
return;
};
let signer = data.signed.signer;
let Ok(()) = self.check_sign_data_len(signer, data.data.len()) else {
let Ok(()) = self.check_sign_data_len(&removed, signer, data.data.len()) else {
return;
};
let expected_len = match data.label {
@@ -441,11 +654,11 @@ impl<
attempt: data.attempt,
};
let Accumulation::Ready(DataSet::Participating(mut results)) =
self.handle_data(&data_spec, &data.data.encode(), &data.signed)
self.handle_data(&removed, &data_spec, &data.data.encode(), &data.signed)
else {
return;
};
unflatten(self.spec, &mut results);
unflatten(self.spec, &removed, &mut results);
let id = SubstrateSignId {
session: self.spec.set().session,
@@ -466,7 +679,16 @@ impl<
}
Transaction::Sign(data) => {
let Ok(()) = self.check_sign_data_len(data.signed.signer, data.data.len()) else {
let Some(removed) =
crate::tributary::removed_as_of_set_keys(self.txn, self.spec.set(), genesis)
else {
self.fatal_slash(
data.signed.signer.to_bytes(),
"signing despite not having set keys on substrate",
);
return;
};
let Ok(()) = self.check_sign_data_len(&removed, data.signed.signer, data.data.len()) else {
return;
};
@@ -476,9 +698,9 @@ impl<
attempt: data.attempt,
};
if let Accumulation::Ready(DataSet::Participating(mut results)) =
self.handle_data(&data_spec, &data.data.encode(), &data.signed)
self.handle_data(&removed, &data_spec, &data.data.encode(), &data.signed)
{
unflatten(self.spec, &mut results);
unflatten(self.spec, &removed, &mut results);
let id =
SignId { session: self.spec.set().session, id: data.plan, attempt: data.attempt };
self
@@ -519,7 +741,8 @@ impl<
}
Transaction::SlashReport(points, signed) => {
let signer_range = self.spec.i(signed.signer).unwrap();
// Uses &[] as we only need the length which is independent to who else was removed
let signer_range = self.spec.i(&[], signed.signer).unwrap();
let signer_len = u16::from(signer_range.end) - u16::from(signer_range.start);
if points.len() != (self.spec.validators().len() - 1) {
self.fatal_slash(

View File

@@ -1,3 +1,8 @@
use dalek_ff_group::Ristretto;
use ciphersuite::{group::GroupEncoding, Ciphersuite};
use serai_client::validator_sets::primitives::ExternalValidatorSet;
use tributary::{
ReadWrite,
transaction::{TransactionError, TransactionKind, Transaction as TransactionTrait},
@@ -20,6 +25,39 @@ pub use handle::*;
pub mod scanner;
pub fn removed_as_of_dkg_attempt(
getter: &impl Get,
genesis: [u8; 32],
attempt: u32,
) -> Option<Vec<<Ristretto as Ciphersuite>::G>> {
if attempt == 0 {
Some(vec![])
} else {
RemovedAsOfDkgAttempt::get(getter, genesis, attempt).map(|keys| {
keys.iter().map(|key| <Ristretto as Ciphersuite>::G::from_bytes(key).unwrap()).collect()
})
}
}
pub fn removed_as_of_set_keys(
getter: &impl Get,
set: ExternalValidatorSet,
genesis: [u8; 32],
) -> Option<Vec<<Ristretto as Ciphersuite>::G>> {
// SeraiDkgCompleted has the key placed on-chain.
// This key can be uniquely mapped to an attempt so long as one participant was honest, and we
// ourselves are a presumably honest participant.
// Resolve from the generated key to its attempt, and from that attempt to who was fatally
// slashed as of it.
// This expect will trigger if this is called prematurely, where Substrate has tracked the keys
// yet we haven't locally synced and handled the Tributary.
// All callers of this, at the time of writing, ensure the Tributary has sufficiently synced,
// making the panic with context more desirable than the None.
let attempt = KeyToDkgAttempt::get(getter, SeraiDkgCompleted::get(getter, set)?)
.expect("key completed on-chain didn't have an attempt related");
removed_as_of_dkg_attempt(getter, genesis, attempt)
}
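// Resolution chain, restated: SeraiDkgCompleted(set) -> key pair -> KeyToDkgAttempt(key)
// -> attempt -> removed_as_of_dkg_attempt(genesis, attempt)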
pub async fn publish_signed_transaction<D: Db, P: crate::P2p>(
txn: &mut D::Transaction<'_>,
tributary: &Tributary<D, Transaction, P>,

View File

@@ -1,18 +1,17 @@
use core::{marker::PhantomData, future::Future, time::Duration};
use std::sync::Arc;
use core::{marker::PhantomData, ops::Deref, future::Future, time::Duration};
use std::{sync::Arc, collections::HashSet};
use zeroize::Zeroizing;
use rand_core::OsRng;
use ciphersuite::{group::GroupEncoding, Ciphersuite, Ristretto};
use dalek_ff_group::Ristretto;
use ciphersuite::{group::GroupEncoding, Ciphersuite};
use tokio::sync::broadcast;
use scale::{Encode, Decode};
use serai_client::{
primitives::Signature,
validator_sets::primitives::{KeyPair, ValidatorSet},
primitives::{SeraiAddress, Signature},
validator_sets::primitives::{ExternalValidatorSet, KeyPair},
Serai,
};
@@ -40,7 +39,7 @@ pub enum RecognizedIdType {
pub trait RIDTrait {
async fn recognized_id(
&self,
set: ValidatorSet,
set: ExternalValidatorSet,
genesis: [u8; 32],
kind: RecognizedIdType,
id: Vec<u8>,
@@ -49,12 +48,12 @@ pub trait RIDTrait {
#[async_trait::async_trait]
impl<
FRid: Send + Future<Output = ()>,
F: Sync + Fn(ValidatorSet, [u8; 32], RecognizedIdType, Vec<u8>) -> FRid,
F: Sync + Fn(ExternalValidatorSet, [u8; 32], RecognizedIdType, Vec<u8>) -> FRid,
> RIDTrait for F
{
async fn recognized_id(
&self,
set: ValidatorSet,
set: ExternalValidatorSet,
genesis: [u8; 32],
kind: RecognizedIdType,
id: Vec<u8>,
@@ -68,9 +67,9 @@ pub trait PublishSeraiTransaction {
async fn publish_set_keys(
&self,
db: &(impl Sync + Get),
set: ValidatorSet,
set: ExternalValidatorSet,
removed: Vec<SeraiAddress>,
key_pair: KeyPair,
signature_participants: bitvec::vec::BitVec<u8, bitvec::order::Lsb0>,
signature: Signature,
);
}
@@ -88,7 +87,7 @@ mod impl_pst_for_serai {
async fn publish(
serai: &Serai,
db: &impl Get,
set: ValidatorSet,
set: ExternalValidatorSet,
tx: serai_client::Transaction,
meta: $Meta,
) -> bool {
@@ -130,14 +129,19 @@ mod impl_pst_for_serai {
async fn publish_set_keys(
&self,
db: &(impl Sync + Get),
set: ValidatorSet,
set: ExternalValidatorSet,
removed: Vec<SeraiAddress>,
key_pair: KeyPair,
signature_participants: bitvec::vec::BitVec<u8, bitvec::order::Lsb0>,
signature: Signature,
) {
let tx =
SeraiValidatorSets::set_keys(set.network, key_pair, signature_participants, signature);
async fn check(serai: SeraiValidatorSets<'_>, set: ValidatorSet, (): ()) -> bool {
// TODO: BoundedVec as an arg to avoid this expect
let tx = SeraiValidatorSets::set_keys(
set.network,
removed.try_into().expect("removing more than allowed"),
key_pair,
signature,
);
async fn check(serai: SeraiValidatorSets<'_>, set: ExternalValidatorSet, (): ()) -> bool {
if matches!(serai.keys(set).await, Ok(Some(_))) {
log::info!("another coordinator set key pair for {:?}", set);
return true;
@@ -246,15 +250,18 @@ impl<
let genesis = self.spec.genesis();
let current_fatal_slashes = FatalSlashes::get_as_keys(self.txn, genesis);
// Calculate the shares still present, spinning if not enough are
{
// still_present_shares is used by a branch below, yet it's a natural byproduct of checking
// whether we should spin, hence it's stored in a variable here
let still_present_shares = {
// Start with the original n value
let mut present_shares = self.spec.n();
let mut present_shares = self.spec.n(&[]);
// Remove everyone fatally slashed
let current_fatal_slashes = FatalSlashes::get_as_keys(self.txn, genesis);
for removed in &current_fatal_slashes {
let original_i_for_removed =
self.spec.i(*removed).expect("removed party was never present");
self.spec.i(&[], *removed).expect("removed party was never present");
let removed_shares =
u16::from(original_i_for_removed.end) - u16::from(original_i_for_removed.start);
present_shares -= removed_shares;
@@ -270,17 +277,79 @@ impl<
tokio::time::sleep(core::time::Duration::from_secs(60)).await;
}
}
}
present_shares
};
for topic in ReattemptDb::take(self.txn, genesis, self.block_number) {
let attempt = AttemptDb::start_next_attempt(self.txn, genesis, topic);
log::info!("potentially re-attempting {topic:?} with attempt {attempt}");
log::info!("re-attempting {topic:?} with attempt {attempt}");
// Slash people who failed to participate as expected in the prior attempt
{
let prior_attempt = attempt - 1;
// TODO: If 67% sent preprocesses, this should be them. Else, this should be vec![]
let expected_participants: Vec<<Ristretto as Ciphersuite>::G> = vec![];
let (removed, expected_participants) = match topic {
Topic::Dkg => {
// Every validator who wasn't removed is expected to have participated
let removed =
crate::tributary::removed_as_of_dkg_attempt(self.txn, genesis, prior_attempt)
.expect("prior attempt didn't have its removed saved to disk");
let removed_set = removed.iter().copied().collect::<HashSet<_>>();
(
removed,
self
.spec
.validators()
.into_iter()
.filter_map(|(validator, _)| {
Some(validator).filter(|validator| !removed_set.contains(validator))
})
.collect(),
)
}
Topic::DkgConfirmation => {
panic!("TODO: re-attempting DkgConfirmation when we should be re-attempting the Dkg")
}
Topic::SubstrateSign(_) | Topic::Sign(_) => {
let removed =
crate::tributary::removed_as_of_set_keys(self.txn, self.spec.set(), genesis)
.expect("SubstrateSign/Sign yet have yet to set keys");
// TODO: If 67% sent preprocesses, this should be them. Else, this should be vec![]
let expected_participants = vec![];
(removed, expected_participants)
}
};
let (expected_topic, expected_label) = match topic {
Topic::Dkg => {
let n = self.spec.n(&removed);
// If we got all the DKG shares, we should be on DKG confirmation
let share_spec =
DataSpecification { topic: Topic::Dkg, label: Label::Share, attempt: prior_attempt };
if DataReceived::get(self.txn, genesis, &share_spec).unwrap_or(0) == n {
// Label::Share, as there is no Label::Preprocess for DkgConfirmation, since its
// preprocess is part of Topic::Dkg Label::Share
(Topic::DkgConfirmation, Label::Share)
} else {
let preprocess_spec = DataSpecification {
topic: Topic::Dkg,
label: Label::Preprocess,
attempt: prior_attempt,
};
// If we got all the DKG preprocesses, DKG shares
if DataReceived::get(self.txn, genesis, &preprocess_spec).unwrap_or(0) == n {
// Label::Share, as there is no Label::Preprocess for DkgConfirmation, since its
// preprocess is part of Topic::Dkg Label::Share
(Topic::Dkg, Label::Share)
} else {
(Topic::Dkg, Label::Preprocess)
}
}
}
Topic::DkgConfirmation => unreachable!(),
// If we got enough participants to move forward, then we expect shares from them all
Topic::SubstrateSign(_) | Topic::Sign(_) => (topic, Label::Share),
};
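// Decision summary: all n DKG shares received => expect DkgConfirmation Label::Share
// (its preprocess rides on Topic::Dkg Label::Share); all n preprocesses yet not all
// shares => expect Topic::Dkg Label::Share; otherwise => Topic::Dkg Label::Preprocess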
let mut did_not_participate = vec![];
for expected_participant in expected_participants {
@@ -288,9 +357,8 @@ impl<
self.txn,
genesis,
&DataSpecification {
topic,
// Since we got the preprocesses, we were supposed to get the shares
label: Label::Share,
topic: expected_topic,
label: expected_label,
attempt: prior_attempt,
},
&expected_participant.to_bytes(),
@@ -306,8 +374,15 @@ impl<
// Accordingly, clear did_not_participate
// TODO
// TODO: Increment the slash points of people who didn't preprocess in some expected window
// of time
// If during the DKG, explicitly mark these people as having been offline
// TODO: If they were offline sufficiently long ago, don't strike them off
if topic == Topic::Dkg {
let mut existing = OfflineDuringDkg::get(self.txn, genesis).unwrap_or(vec![]);
for did_not_participate in did_not_participate {
existing.push(did_not_participate.to_bytes());
}
OfflineDuringDkg::set(self.txn, genesis, &existing);
}
// Slash everyone who didn't participate as expected
// This may be overzealous as if a minority detects a completion, they'll abort yet the
@@ -337,22 +412,75 @@ impl<
then preprocesses. This only sends preprocesses).
*/
match topic {
Topic::DkgConfirmation => {
if SeraiDkgCompleted::get(self.txn, self.spec.set()).is_none() {
log::info!("re-attempting DKG confirmation with attempt {attempt}");
Topic::Dkg => {
let mut removed = current_fatal_slashes.clone();
// Since it wasn't completed, publish our nonces for the next attempt
let confirmation_nonces =
crate::tributary::dkg_confirmation_nonces(self.our_key, self.spec, self.txn, attempt);
let mut tx = Transaction::DkgConfirmationNonces {
attempt,
confirmation_nonces,
signed: Transaction::empty_signed(),
let t = self.spec.t();
{
let mut present_shares = still_present_shares;
// Load the parties marked as offline across the various attempts
let mut offline = OfflineDuringDkg::get(self.txn, genesis)
.unwrap_or(vec![])
.iter()
.map(|key| <Ristretto as Ciphersuite>::G::from_bytes(key).unwrap())
.collect::<Vec<_>>();
// Pop from the list to prioritize the removal of those recently offline
while let Some(offline) = offline.pop() {
// Make sure they weren't removed already (such as due to being fatally slashed)
// This also may trigger if they were offline across multiple attempts
if removed.contains(&offline) {
continue;
}
// If we can remove them and still meet the threshold, do so
let original_i_for_offline =
self.spec.i(&[], offline).expect("offline was never present?");
let offline_shares =
u16::from(original_i_for_offline.end) - u16::from(original_i_for_offline.start);
if (present_shares - offline_shares) >= t {
present_shares -= offline_shares;
removed.push(offline);
}
// If we've removed as many people as we can, break
if present_shares == t {
break;
}
}
}
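// Worked example (hypothetical values): with n(&[]) = 6 shares and t = 5, only one
// single-share offline validator can be removed (6 - 1 = 5 >= t); the loop then breaks
// at present_shares == t and any other offline validators are kept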
RemovedAsOfDkgAttempt::set(
self.txn,
genesis,
attempt,
&removed.iter().map(<Ristretto as Ciphersuite>::G::to_bytes).collect(),
);
if DkgLocallyCompleted::get(self.txn, genesis).is_none() {
let Some(our_i) = self.spec.i(&removed, Ristretto::generator() * self.our_key.deref())
else {
continue;
};
tx.sign(&mut OsRng, genesis, self.our_key);
self.publish_tributary_tx.publish_tributary_tx(tx).await;
// Since it wasn't completed, instruct the processor to start the next attempt
let id =
processor_messages::key_gen::KeyGenId { session: self.spec.set().session, attempt };
let params =
frost::ThresholdParams::new(t, self.spec.n(&removed), our_i.start).unwrap();
let shares = u16::from(our_i.end) - u16::from(our_i.start);
self
.processors
.send(
self.spec.set().network,
processor_messages::key_gen::CoordinatorMessage::GenerateKey { id, params, shares },
)
.await;
}
}
Topic::DkgConfirmation => unreachable!(),
Topic::SubstrateSign(inner_id) => {
let id = processor_messages::coordinator::SubstrateSignId {
session: self.spec.set().session,
@@ -369,8 +497,6 @@ impl<
crate::cosign_evaluator::LatestCosign::get(self.txn, self.spec.set().network)
.map_or(0, |cosign| cosign.block_number);
if latest_cosign < block_number {
log::info!("re-attempting cosigning {block_number:?} with attempt {attempt}");
// Instruct the processor to start the next attempt
self
.processors
@@ -387,8 +513,6 @@ impl<
SubstrateSignableId::Batch(batch) => {
// If the Batch hasn't appeared on-chain...
if BatchInstructionsHashDb::get(self.txn, self.spec.set().network, batch).is_none() {
log::info!("re-attempting signing batch {batch:?} with attempt {attempt}");
// Instruct the processor to start the next attempt
// The processor won't continue if it's already signed a Batch
// Prior checking if the Batch is on-chain just may reduce the non-participating
@@ -406,11 +530,6 @@ impl<
// If this Tributary hasn't been retired...
// (published SlashReport/took too long to do so)
if crate::RetiredTributaryDb::get(self.txn, self.spec.set()).is_none() {
log::info!(
"re-attempting signing slash report for {:?} with attempt {attempt}",
self.spec.set()
);
let report = SlashReport::get(self.txn, self.spec.set())
.expect("re-attempting signing a SlashReport we don't have?");
self
@@ -457,7 +576,8 @@ impl<
};
// Assign them 0 points for themselves
report.insert(i, 0);
let signer_i = self.spec.i(validator).unwrap();
// Uses &[] as we only need the length which is independent to who else was removed
let signer_i = self.spec.i(&[], validator).unwrap();
let signer_len = u16::from(signer_i.end) - u16::from(signer_i.start);
// Push `n` copies, one for each of their shares
for _ in 0 .. signer_len {

View File

@@ -55,7 +55,7 @@
*/
use core::ops::Deref;
use std::collections::{HashSet, HashMap};
use std::collections::HashMap;
use zeroize::{Zeroize, Zeroizing};
@@ -63,18 +63,21 @@ use rand_core::OsRng;
use blake2::{Digest, Blake2s256};
use ciphersuite::{group::ff::PrimeField, Ciphersuite, Ristretto};
use frost::{
FrostError,
dkg::{Participant, musig::musig},
ThresholdKeys,
sign::*,
use dalek_ff_group::Ristretto;
use ciphersuite::{
group::{ff::PrimeField, GroupEncoding},
Ciphersuite,
};
use dkg_musig::musig;
use frost::{FrostError, dkg::Participant, ThresholdKeys, sign::*};
use frost_schnorrkel::Schnorrkel;
use scale::Encode;
use serai_client::validator_sets::primitives::{KeyPair, musig_context, set_keys_message};
use serai_client::{
Public,
validator_sets::primitives::{KeyPair, musig_context, set_keys_message},
};
use serai_db::*;
@@ -83,7 +86,6 @@ use crate::tributary::TributarySpec;
create_db!(
SigningProtocolDb {
CachedPreprocesses: (context: &impl Encode) -> [u8; 32]
DataSignedWith: (context: &impl Encode) -> (Vec<u8>, HashMap<Participant, Vec<u8>>),
}
);
@@ -112,22 +114,16 @@ impl<T: DbTxn, C: Encode> SigningProtocol<'_, T, C> {
};
let encryption_key_slice: &mut [u8] = encryption_key.as_mut();
// Create the MuSig keys
let algorithm = Schnorrkel::new(b"substrate");
let keys: ThresholdKeys<Ristretto> =
musig(&musig_context(self.spec.set()), self.key, participants)
musig(musig_context(self.spec.set().into()), self.key.clone(), participants)
.expect("signing for a set we aren't in/validator present multiple times")
.into();
// Define the algorithm
let algorithm = Schnorrkel::new(b"substrate");
// Check if we've prior preprocessed
if CachedPreprocesses::get(self.txn, &self.context).is_none() {
// If we haven't, we create a machine solely to obtain the preprocess with
let (machine, _) =
AlgorithmMachine::new(algorithm.clone(), keys.clone()).preprocess(&mut OsRng);
// Cache and save the preprocess to disk
let mut cache = machine.cache();
assert_eq!(cache.0.len(), 32);
#[allow(clippy::needless_range_loop)]
@@ -138,15 +134,13 @@ impl<T: DbTxn, C: Encode> SigningProtocol<'_, T, C> {
CachedPreprocesses::set(self.txn, &self.context, &cache.0);
}
// We're now guaranteed to have the preprocess, hence this `unwrap` is safe
let cached = CachedPreprocesses::get(self.txn, &self.context).unwrap();
let mut cached = Zeroizing::new(cached);
let mut cached: Zeroizing<[u8; 32]> = Zeroizing::new(cached);
#[allow(clippy::needless_range_loop)]
for b in 0 .. 32 {
cached[b] ^= encryption_key_slice[b];
}
encryption_key_slice.zeroize();
// Create the machine from the cached preprocess
let (machine, preprocess) =
AlgorithmSignMachine::from_cache(algorithm, keys, CachedPreprocess(cached));
@@ -159,29 +153,8 @@ impl<T: DbTxn, C: Encode> SigningProtocol<'_, T, C> {
mut serialized_preprocesses: HashMap<Participant, Vec<u8>>,
msg: &[u8],
) -> Result<(AlgorithmSignatureMachine<Ristretto, Schnorrkel>, [u8; 32]), Participant> {
// We can't clear the preprocess as we sitll need it to accumulate all of the shares
// We do save the message we signed so any future calls with distinct messages panic
// This assumes the txn deciding this data is committed before the share is broaadcast
if let Some((existing_msg, existing_preprocesses)) =
DataSignedWith::get(self.txn, &self.context)
{
assert_eq!(msg, &existing_msg, "obtaining a signature share for a distinct message");
assert_eq!(
&serialized_preprocesses, &existing_preprocesses,
"obtaining a signature share with a distinct set of preprocesses"
);
} else {
DataSignedWith::set(
self.txn,
&self.context,
&(msg.to_vec(), serialized_preprocesses.clone()),
);
}
let machine = self.preprocess_internal(participants).0;
// Get the preprocessed machine
let (machine, _) = self.preprocess_internal(participants);
// Deserialize all the preprocesses
let mut participants = serialized_preprocesses.keys().copied().collect::<Vec<_>>();
participants.sort();
let mut preprocesses = HashMap::new();
@@ -194,14 +167,13 @@ impl<T: DbTxn, C: Encode> SigningProtocol<'_, T, C> {
);
}
// Sign the share
let (machine, share) = machine.sign(preprocesses, msg).map_err(|e| match e {
FrostError::InternalError(e) => unreachable!("FrostError::InternalError {e}"),
FrostError::InvalidParticipant(_, _) |
FrostError::InvalidSigningSet(_) |
FrostError::InvalidParticipantQuantity(_, _) |
FrostError::DuplicatedParticipant(_) |
FrostError::MissingParticipant(_) => panic!("unexpected error during sign: {e:?}"),
FrostError::MissingParticipant(_) => unreachable!("{e:?}"),
FrostError::InvalidPreprocess(p) | FrostError::InvalidShare(p) => p,
})?;
@@ -232,24 +204,24 @@ impl<T: DbTxn, C: Encode> SigningProtocol<'_, T, C> {
}
// Get the keys of the participants, noted by their threshold is, and return a new map indexed by
// their MuSig is.
// the MuSig is.
fn threshold_i_map_to_keys_and_musig_i_map(
spec: &TributarySpec,
removed: &[<Ristretto as Ciphersuite>::G],
our_key: &Zeroizing<<Ristretto as Ciphersuite>::F>,
mut map: HashMap<Participant, Vec<u8>>,
) -> (Vec<<Ristretto as Ciphersuite>::G>, HashMap<Participant, Vec<u8>>) {
// Insert our own index so calculations aren't offset
let our_threshold_i = spec
.i(<Ristretto as Ciphersuite>::generator() * our_key.deref())
.expect("not in a set we're signing for")
.i(removed, <Ristretto as Ciphersuite>::generator() * our_key.deref())
.expect("MuSig t-of-n signing a for a protocol we were removed from")
.start;
// Asserts we weren't unexpectedly already present
assert!(map.insert(our_threshold_i, vec![]).is_none());
let spec_validators = spec.validators();
let key_from_threshold_i = |threshold_i| {
for (key, _) in &spec_validators {
if threshold_i == spec.i(*key).expect("validator wasn't in a set they're in").start {
if threshold_i == spec.i(removed, *key).expect("MuSig t-of-n participant was removed").start {
return *key;
}
}
@@ -260,27 +232,19 @@ fn threshold_i_map_to_keys_and_musig_i_map(
let mut threshold_is = map.keys().copied().collect::<Vec<_>>();
threshold_is.sort();
for threshold_i in threshold_is {
sorted.push((
threshold_i,
key_from_threshold_i(threshold_i),
map.remove(&threshold_i).unwrap(),
));
sorted.push((key_from_threshold_i(threshold_i), map.remove(&threshold_i).unwrap()));
}
// Now that signers are sorted, with their shares, create a map with the is needed for MuSig
let mut participants = vec![];
let mut map = HashMap::new();
let mut our_musig_i = None;
for (raw_i, (threshold_i, key, share)) in sorted.into_iter().enumerate() {
let musig_i = Participant::new(u16::try_from(raw_i).unwrap() + 1).unwrap();
if threshold_i == our_threshold_i {
our_musig_i = Some(musig_i);
}
for (raw_i, (key, share)) in sorted.into_iter().enumerate() {
let musig_i = u16::try_from(raw_i).unwrap() + 1;
participants.push(key);
map.insert(musig_i, share);
map.insert(Participant::new(musig_i).unwrap(), share);
}
map.remove(&our_musig_i.unwrap()).unwrap();
map.remove(&our_threshold_i).unwrap();
(participants, map)
}
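// Worked example (hypothetical indices): a map keyed by threshold is 1, 4, and 6 sorts
// and re-indexes to MuSig is 1, 2, and 3, each paired with its validator's key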
@@ -290,6 +254,7 @@ type DkgConfirmerSigningProtocol<'a, T> = SigningProtocol<'a, T, (&'static [u8;
pub(crate) struct DkgConfirmer<'a, T: DbTxn> {
key: &'a Zeroizing<<Ristretto as Ciphersuite>::F>,
spec: &'a TributarySpec,
removed: Vec<<Ristretto as Ciphersuite>::G>,
txn: &'a mut T,
attempt: u32,
}
@@ -300,19 +265,19 @@ impl<T: DbTxn> DkgConfirmer<'_, T> {
spec: &'a TributarySpec,
txn: &'a mut T,
attempt: u32,
) -> DkgConfirmer<'a, T> {
DkgConfirmer { key, spec, txn, attempt }
) -> Option<DkgConfirmer<'a, T>> {
// This relies on how confirmations are inlined into the DKG protocol and they accordingly
// share attempts
let removed = crate::tributary::removed_as_of_dkg_attempt(txn, spec.genesis(), attempt)?;
Some(DkgConfirmer { key, spec, removed, txn, attempt })
}
fn signing_protocol(&mut self) -> DkgConfirmerSigningProtocol<'_, T> {
let context = (b"DkgConfirmer", self.attempt);
SigningProtocol { key: self.key, spec: self.spec, txn: self.txn, context }
}
fn preprocess_internal(&mut self) -> (AlgorithmSignMachine<Ristretto, Schnorrkel>, [u8; 64]) {
// This preprocesses with just us as we only decide the participants after obtaining
// preprocesses
let participants = vec![<Ristretto as Ciphersuite>::generator() * self.key.deref()];
let participants = self.spec.validators().iter().map(|val| val.0).collect::<Vec<_>>();
self.signing_protocol().preprocess_internal(&participants)
}
// Get the preprocess for this confirmation.
@@ -325,9 +290,14 @@ impl<T: DbTxn> DkgConfirmer<'_, T> {
preprocesses: HashMap<Participant, Vec<u8>>,
key_pair: &KeyPair,
) -> Result<(AlgorithmSignatureMachine<Ristretto, Schnorrkel>, [u8; 32]), Participant> {
let (participants, preprocesses) =
threshold_i_map_to_keys_and_musig_i_map(self.spec, self.key, preprocesses);
let msg = set_keys_message(&self.spec.set(), key_pair);
let participants = self.spec.validators().iter().map(|val| val.0).collect::<Vec<_>>();
let preprocesses =
threshold_i_map_to_keys_and_musig_i_map(self.spec, &self.removed, self.key, preprocesses).1;
let msg = set_keys_message(
&self.spec.set(),
&self.removed.iter().map(|key| Public::from(key.to_bytes())).collect::<Vec<_>>(),
key_pair,
);
self.signing_protocol().share_internal(&participants, preprocesses, &msg)
}
// Get the share for this confirmation, if the preprocesses are valid.
@@ -345,9 +315,8 @@ impl<T: DbTxn> DkgConfirmer<'_, T> {
key_pair: &KeyPair,
shares: HashMap<Participant, Vec<u8>>,
) -> Result<[u8; 64], Participant> {
assert_eq!(preprocesses.keys().collect::<HashSet<_>>(), shares.keys().collect::<HashSet<_>>());
let shares = threshold_i_map_to_keys_and_musig_i_map(self.spec, self.key, shares).1;
let shares =
threshold_i_map_to_keys_and_musig_i_map(self.spec, &self.removed, self.key, shares).1;
let machine = self
.share_internal(preprocesses, key_pair)

View File

@@ -3,13 +3,14 @@ use std::{io, collections::HashMap};
use transcript::{Transcript, RecommendedTranscript};
use ciphersuite::{group::GroupEncoding, Ciphersuite, Ristretto};
use dalek_ff_group::Ristretto;
use ciphersuite::{group::GroupEncoding, Ciphersuite};
use frost::Participant;
use scale::Encode;
use borsh::{BorshSerialize, BorshDeserialize};
use serai_client::validator_sets::primitives::ValidatorSet;
use serai_client::{primitives::PublicKey, validator_sets::primitives::ExternalValidatorSet};
fn borsh_serialize_validators<W: io::Write>(
validators: &Vec<(<Ristretto as Ciphersuite>::G, u16)>,
@@ -43,27 +44,32 @@ fn borsh_deserialize_validators<R: io::Read>(
pub struct TributarySpec {
serai_block: [u8; 32],
start_time: u64,
set: ValidatorSet,
set: ExternalValidatorSet,
#[borsh(
serialize_with = "borsh_serialize_validators",
deserialize_with = "borsh_deserialize_validators"
)]
validators: Vec<(<Ristretto as Ciphersuite>::G, u16)>,
evrf_public_keys: Vec<([u8; 32], Vec<u8>)>,
}
impl TributarySpec {
pub fn new(
serai_block: [u8; 32],
start_time: u64,
set: ValidatorSet,
validators: Vec<(<Ristretto as Ciphersuite>::G, u16)>,
evrf_public_keys: Vec<([u8; 32], Vec<u8>)>,
set: ExternalValidatorSet,
set_participants: Vec<(PublicKey, u16)>,
) -> TributarySpec {
Self { serai_block, start_time, set, validators, evrf_public_keys }
let mut validators = vec![];
for (participant, shares) in set_participants {
let participant = <Ristretto as Ciphersuite>::read_G::<&[u8]>(&mut participant.0.as_ref())
.expect("invalid key registered as participant");
validators.push((participant, shares));
}
Self { serai_block, start_time, set, validators }
}
pub fn set(&self) -> ValidatorSet {
pub fn set(&self) -> ExternalValidatorSet {
self.set
}
@@ -83,15 +89,24 @@ impl TributarySpec {
self.start_time
}
pub fn n(&self) -> u16 {
self.validators.iter().map(|(_, weight)| *weight).sum()
pub fn n(&self, removed_validators: &[<Ristretto as Ciphersuite>::G]) -> u16 {
self
.validators
.iter()
.map(|(validator, weight)| if removed_validators.contains(validator) { 0 } else { *weight })
.sum()
}
pub fn t(&self) -> u16 {
((2 * self.n()) / 3) + 1
// t doesn't change with regard to the amount of removed validators
((2 * self.n(&[])) / 3) + 1
}
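// e.g. 4 total key shares give t = ((2 * 4) / 3) + 1 = 3, and t remains 3 even after
// a validator's removal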
pub fn i(&self, key: <Ristretto as Ciphersuite>::G) -> Option<Range<Participant>> {
pub fn i(
&self,
removed_validators: &[<Ristretto as Ciphersuite>::G],
key: <Ristretto as Ciphersuite>::G,
) -> Option<Range<Participant>> {
let mut all_is = HashMap::new();
let mut i = 1;
for (validator, weight) in &self.validators {
@@ -102,12 +117,34 @@ impl TributarySpec {
i += weight;
}
Some(all_is.get(&key)?.clone())
let original_i = all_is.get(&key)?.clone();
let mut result_i = original_i.clone();
for removed_validator in removed_validators {
let removed_i = all_is
.get(removed_validator)
.expect("removed validator wasn't present in set to begin with");
// If the queried key was removed, return None
if &original_i == removed_i {
return None;
}
// If the removed was before the queried, shift the queried down accordingly
if removed_i.start < original_i.start {
let removed_shares = u16::from(removed_i.end) - u16::from(removed_i.start);
result_i.start = Participant::new(u16::from(original_i.start) - removed_shares).unwrap();
result_i.end = Participant::new(u16::from(original_i.end) - removed_shares).unwrap();
}
}
Some(result_i)
}
pub fn reverse_lookup_i(&self, i: Participant) -> Option<<Ristretto as Ciphersuite>::G> {
pub fn reverse_lookup_i(
&self,
removed_validators: &[<Ristretto as Ciphersuite>::G],
i: Participant,
) -> Option<<Ristretto as Ciphersuite>::G> {
for (validator, _) in &self.validators {
if self.i(*validator).map_or(false, |range| range.contains(&i)) {
if self.i(removed_validators, *validator).map_or(false, |range| range.contains(&i)) {
return Some(*validator);
}
}
@@ -117,8 +154,4 @@ impl TributarySpec {
pub fn validators(&self) -> Vec<(<Ristretto as Ciphersuite>::G, u64)> {
self.validators.iter().map(|(validator, weight)| (*validator, u64::from(*weight))).collect()
}
pub fn evrf_public_keys(&self) -> Vec<([u8; 32], Vec<u8>)> {
self.evrf_public_keys.clone()
}
}
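
The shifted-index rule in `i` above is easiest to check over plain integers. A minimal sketch under that simplification (the function is hypothetical and drops the `Participant` wrapper):

use core::ops::Range;

// Shift a validator's original share range down by the shares of every removed validator
// ordered before it, returning None if the queried validator was itself removed
fn shifted_range(original: Range<u16>, removed: &[Range<u16>]) -> Option<Range<u16>> {
  let mut result = original.clone();
  for removed_range in removed {
    if *removed_range == original {
      return None;
    }
    if removed_range.start < original.start {
      let removed_shares = removed_range.end - removed_range.start;
      result.start -= removed_shares;
      result.end -= removed_shares;
    }
  }
  Some(result)
}

For example, with A = 2 shares (1 .. 3), B = 1 share (3 .. 4), and C = 3 shares (4 .. 7), removing B yields None for B, leaves A at 1 .. 3, and shifts C down to 3 .. 6.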

View File

@@ -7,11 +7,13 @@ use rand_core::{RngCore, CryptoRng};
use blake2::{Digest, Blake2s256};
use transcript::{Transcript, RecommendedTranscript};
use dalek_ff_group::Ristretto;
use ciphersuite::{
group::{ff::Field, GroupEncoding},
Ciphersuite, Ristretto,
Ciphersuite,
};
use schnorr::SchnorrSignature;
use frost::Participant;
use scale::{Encode, Decode};
use processor_messages::coordinator::SubstrateSignableId;
@@ -129,26 +131,32 @@ impl<Id: Clone + PartialEq + Eq + Debug + Encode + Decode> SignData<Id> {
#[derive(Clone, PartialEq, Eq)]
pub enum Transaction {
RemoveParticipant {
RemoveParticipantDueToDkg {
participant: <Ristretto as Ciphersuite>::G,
signed: Signed,
},
DkgParticipation {
participation: Vec<u8>,
DkgCommitments {
attempt: u32,
commitments: Vec<Vec<u8>>,
signed: Signed,
},
DkgConfirmationNonces {
// The confirmation attempt
DkgShares {
attempt: u32,
// The nonces for DKG confirmation attempt #attempt
// Sending Participant, Receiving Participant, Share
shares: Vec<Vec<Vec<u8>>>,
confirmation_nonces: [u8; 64],
signed: Signed,
},
DkgConfirmationShare {
// The confirmation attempt
InvalidDkgShare {
attempt: u32,
accuser: Participant,
faulty: Participant,
blame: Option<Vec<u8>>,
signed: Signed,
},
DkgConfirmed {
attempt: u32,
// The share for DKG confirmation attempt #attempt
confirmation_share: [u8; 32],
signed: Signed,
},
@@ -190,22 +198,29 @@ pub enum Transaction {
impl Debug for Transaction {
fn fmt(&self, fmt: &mut core::fmt::Formatter<'_>) -> Result<(), core::fmt::Error> {
match self {
Transaction::RemoveParticipant { participant, signed } => fmt
.debug_struct("Transaction::RemoveParticipant")
Transaction::RemoveParticipantDueToDkg { participant, signed } => fmt
.debug_struct("Transaction::RemoveParticipantDueToDkg")
.field("participant", &hex::encode(participant.to_bytes()))
.field("signer", &hex::encode(signed.signer.to_bytes()))
.finish_non_exhaustive(),
Transaction::DkgParticipation { signed, .. } => fmt
.debug_struct("Transaction::DkgParticipation")
.field("signer", &hex::encode(signed.signer.to_bytes()))
.finish_non_exhaustive(),
Transaction::DkgConfirmationNonces { attempt, signed, .. } => fmt
.debug_struct("Transaction::DkgConfirmationNonces")
Transaction::DkgCommitments { attempt, commitments: _, signed } => fmt
.debug_struct("Transaction::DkgCommitments")
.field("attempt", attempt)
.field("signer", &hex::encode(signed.signer.to_bytes()))
.finish_non_exhaustive(),
Transaction::DkgConfirmationShare { attempt, signed, .. } => fmt
.debug_struct("Transaction::DkgConfirmationShare")
Transaction::DkgShares { attempt, signed, .. } => fmt
.debug_struct("Transaction::DkgShares")
.field("attempt", attempt)
.field("signer", &hex::encode(signed.signer.to_bytes()))
.finish_non_exhaustive(),
Transaction::InvalidDkgShare { attempt, accuser, faulty, .. } => fmt
.debug_struct("Transaction::InvalidDkgShare")
.field("attempt", attempt)
.field("accuser", accuser)
.field("faulty", faulty)
.finish_non_exhaustive(),
Transaction::DkgConfirmed { attempt, confirmation_share: _, signed } => fmt
.debug_struct("Transaction::DkgConfirmed")
.field("attempt", attempt)
.field("signer", &hex::encode(signed.signer.to_bytes()))
.finish_non_exhaustive(),
@@ -247,32 +262,43 @@ impl ReadWrite for Transaction {
reader.read_exact(&mut kind)?;
match kind[0] {
0 => Ok(Transaction::RemoveParticipant {
0 => Ok(Transaction::RemoveParticipantDueToDkg {
participant: Ristretto::read_G(reader)?,
signed: Signed::read_without_nonce(reader, 0)?,
}),
1 => {
let participation = {
let mut participation_len = [0; 4];
reader.read_exact(&mut participation_len)?;
let participation_len = u32::from_le_bytes(participation_len);
let mut attempt = [0; 4];
reader.read_exact(&mut attempt)?;
let attempt = u32::from_le_bytes(attempt);
if participation_len > u32::try_from(TRANSACTION_SIZE_LIMIT).unwrap() {
let commitments = {
let mut commitments_len = [0; 1];
reader.read_exact(&mut commitments_len)?;
let commitments_len = usize::from(commitments_len[0]);
if commitments_len == 0 {
Err(io::Error::other("zero commitments in DkgCommitments"))?;
}
let mut each_commitments_len = [0; 2];
reader.read_exact(&mut each_commitments_len)?;
let each_commitments_len = usize::from(u16::from_le_bytes(each_commitments_len));
if (commitments_len * each_commitments_len) > TRANSACTION_SIZE_LIMIT {
Err(io::Error::other(
"participation present in transaction exceeded transaction size limit",
"commitments present in transaction exceeded transaction size limit",
))?;
}
let participation_len = usize::try_from(participation_len).unwrap();
let mut participation = vec![0; participation_len];
reader.read_exact(&mut participation)?;
participation
let mut commitments = vec![vec![]; commitments_len];
for commitments in &mut commitments {
*commitments = vec![0; each_commitments_len];
reader.read_exact(commitments)?;
}
commitments
};
let signed = Signed::read_without_nonce(reader, 0)?;
Ok(Transaction::DkgParticipation { participation, signed })
Ok(Transaction::DkgCommitments { attempt, commitments, signed })
}
2 => {
@@ -280,12 +306,36 @@ impl ReadWrite for Transaction {
reader.read_exact(&mut attempt)?;
let attempt = u32::from_le_bytes(attempt);
let shares = {
let mut share_quantity = [0; 1];
reader.read_exact(&mut share_quantity)?;
let mut key_share_quantity = [0; 1];
reader.read_exact(&mut key_share_quantity)?;
let mut share_len = [0; 2];
reader.read_exact(&mut share_len)?;
let share_len = usize::from(u16::from_le_bytes(share_len));
let mut all_shares = vec![];
for _ in 0 .. share_quantity[0] {
let mut shares = vec![];
for _ in 0 .. key_share_quantity[0] {
let mut share = vec![0; share_len];
reader.read_exact(&mut share)?;
shares.push(share);
}
all_shares.push(shares);
}
all_shares
};
let mut confirmation_nonces = [0; 64];
reader.read_exact(&mut confirmation_nonces)?;
let signed = Signed::read_without_nonce(reader, 0)?;
let signed = Signed::read_without_nonce(reader, 1)?;
Ok(Transaction::DkgConfirmationNonces { attempt, confirmation_nonces, signed })
Ok(Transaction::DkgShares { attempt, shares, confirmation_nonces, signed })
}
3 => {
@@ -293,21 +343,53 @@ impl ReadWrite for Transaction {
reader.read_exact(&mut attempt)?;
let attempt = u32::from_le_bytes(attempt);
let mut confirmation_share = [0; 32];
reader.read_exact(&mut confirmation_share)?;
let mut accuser = [0; 2];
reader.read_exact(&mut accuser)?;
let accuser = Participant::new(u16::from_le_bytes(accuser))
.ok_or_else(|| io::Error::other("invalid participant in InvalidDkgShare"))?;
let signed = Signed::read_without_nonce(reader, 1)?;
let mut faulty = [0; 2];
reader.read_exact(&mut faulty)?;
let faulty = Participant::new(u16::from_le_bytes(faulty))
.ok_or_else(|| io::Error::other("invalid participant in InvalidDkgShare"))?;
Ok(Transaction::DkgConfirmationShare { attempt, confirmation_share, signed })
let mut blame_len = [0; 2];
reader.read_exact(&mut blame_len)?;
let mut blame = vec![0; u16::from_le_bytes(blame_len).into()];
reader.read_exact(&mut blame)?;
// This shares a nonce with DkgConfirmed as only one is expected
let signed = Signed::read_without_nonce(reader, 2)?;
Ok(Transaction::InvalidDkgShare {
attempt,
accuser,
faulty,
blame: Some(blame).filter(|blame| !blame.is_empty()),
signed,
})
}
4 => {
let mut attempt = [0; 4];
reader.read_exact(&mut attempt)?;
let attempt = u32::from_le_bytes(attempt);
let mut confirmation_share = [0; 32];
reader.read_exact(&mut confirmation_share)?;
let signed = Signed::read_without_nonce(reader, 2)?;
Ok(Transaction::DkgConfirmed { attempt, confirmation_share, signed })
}
5 => {
let mut block = [0; 32];
reader.read_exact(&mut block)?;
Ok(Transaction::CosignSubstrateBlock(block))
}
5 => {
6 => {
let mut block = [0; 32];
reader.read_exact(&mut block)?;
let mut batch = [0; 4];
@@ -315,16 +397,16 @@ impl ReadWrite for Transaction {
Ok(Transaction::Batch { block, batch: u32::from_le_bytes(batch) })
}
6 => {
7 => {
let mut block = [0; 8];
reader.read_exact(&mut block)?;
Ok(Transaction::SubstrateBlock(u64::from_le_bytes(block)))
}
7 => SignData::read(reader).map(Transaction::SubstrateSign),
8 => SignData::read(reader).map(Transaction::Sign),
8 => SignData::read(reader).map(Transaction::SubstrateSign),
9 => SignData::read(reader).map(Transaction::Sign),
9 => {
10 => {
let mut plan = [0; 32];
reader.read_exact(&mut plan)?;
@@ -339,7 +421,7 @@ impl ReadWrite for Transaction {
Ok(Transaction::SignCompleted { plan, tx_hash, first_signer, signature })
}
10 => {
11 => {
let mut len = [0];
reader.read_exact(&mut len)?;
let len = len[0];
@@ -364,59 +446,109 @@ impl ReadWrite for Transaction {
fn write<W: io::Write>(&self, writer: &mut W) -> io::Result<()> {
match self {
Transaction::RemoveParticipant { participant, signed } => {
Transaction::RemoveParticipantDueToDkg { participant, signed } => {
writer.write_all(&[0])?;
writer.write_all(&participant.to_bytes())?;
signed.write_without_nonce(writer)
}
Transaction::DkgParticipation { participation, signed } => {
Transaction::DkgCommitments { attempt, commitments, signed } => {
writer.write_all(&[1])?;
writer.write_all(&u32::try_from(participation.len()).unwrap().to_le_bytes())?;
writer.write_all(participation)?;
writer.write_all(&attempt.to_le_bytes())?;
if commitments.is_empty() {
Err(io::Error::other("zero commitments in DkgCommitments"))?
}
writer.write_all(&[u8::try_from(commitments.len()).unwrap()])?;
for commitments_i in commitments {
if commitments_i.len() != commitments[0].len() {
Err(io::Error::other("commitments of differing sizes in DkgCommitments"))?
}
}
writer.write_all(&u16::try_from(commitments[0].len()).unwrap().to_le_bytes())?;
for commitments in commitments {
writer.write_all(commitments)?;
}
signed.write_without_nonce(writer)
}
Transaction::DkgConfirmationNonces { attempt, confirmation_nonces, signed } => {
Transaction::DkgShares { attempt, shares, confirmation_nonces, signed } => {
writer.write_all(&[2])?;
writer.write_all(&attempt.to_le_bytes())?;
// `shares` is a Vec which is supposed to map to a HashMap<Participant, Vec<u8>>. Since we
// bound participants to 150, this conversion is safe for any valid in-memory transaction.
writer.write_all(&[u8::try_from(shares.len()).unwrap()])?;
// This assumes at least one share is being sent to another party
writer.write_all(&[u8::try_from(shares[0].len()).unwrap()])?;
let share_len = shares[0][0].len();
// For BLS12-381 G2, this would be:
// - A 32-byte share
// - A 96-byte ephemeral key
// - A 128-byte signature
// Hence why this has to be u16
writer.write_all(&u16::try_from(share_len).unwrap().to_le_bytes())?;
for these_shares in shares {
assert_eq!(these_shares.len(), shares[0].len(), "amount of sent shares was variable");
for share in these_shares {
assert_eq!(share.len(), share_len, "sent shares were of variable length");
writer.write_all(share)?;
}
}
writer.write_all(confirmation_nonces)?;
signed.write_without_nonce(writer)
}
Transaction::DkgConfirmationShare { attempt, confirmation_share, signed } => {
Transaction::InvalidDkgShare { attempt, accuser, faulty, blame, signed } => {
writer.write_all(&[3])?;
writer.write_all(&attempt.to_le_bytes())?;
writer.write_all(&u16::from(*accuser).to_le_bytes())?;
writer.write_all(&u16::from(*faulty).to_le_bytes())?;
// Flattens Some(vec![]) to None on the expectation that no actual blame will be 0-length
assert!(blame.as_ref().map_or(1, Vec::len) != 0);
let blame_len =
u16::try_from(blame.as_ref().unwrap_or(&vec![]).len()).expect("blame exceeded 64 KB");
writer.write_all(&blame_len.to_le_bytes())?;
writer.write_all(blame.as_ref().unwrap_or(&vec![]))?;
signed.write_without_nonce(writer)
}
Transaction::DkgConfirmed { attempt, confirmation_share, signed } => {
writer.write_all(&[4])?;
writer.write_all(&attempt.to_le_bytes())?;
writer.write_all(confirmation_share)?;
signed.write_without_nonce(writer)
}
Transaction::CosignSubstrateBlock(block) => {
writer.write_all(&[4])?;
writer.write_all(&[5])?;
writer.write_all(block)
}
Transaction::Batch { block, batch } => {
writer.write_all(&[5])?;
writer.write_all(&[6])?;
writer.write_all(block)?;
writer.write_all(&batch.to_le_bytes())
}
Transaction::SubstrateBlock(block) => {
writer.write_all(&[6])?;
writer.write_all(&[7])?;
writer.write_all(&block.to_le_bytes())
}
Transaction::SubstrateSign(data) => {
writer.write_all(&[7])?;
data.write(writer)
}
Transaction::Sign(data) => {
writer.write_all(&[8])?;
data.write(writer)
}
Transaction::SignCompleted { plan, tx_hash, first_signer, signature } => {
Transaction::Sign(data) => {
writer.write_all(&[9])?;
data.write(writer)
}
Transaction::SignCompleted { plan, tx_hash, first_signer, signature } => {
writer.write_all(&[10])?;
writer.write_all(plan)?;
writer
.write_all(&[u8::try_from(tx_hash.len()).expect("tx hash length exceeded 255 bytes")])?;
@@ -425,7 +557,7 @@ impl ReadWrite for Transaction {
signature.write(writer)
}
Transaction::SlashReport(points, signed) => {
writer.write_all(&[10])?;
writer.write_all(&[11])?;
writer.write_all(&[u8::try_from(points.len()).unwrap()])?;
for points in points {
writer.write_all(&points.to_le_bytes())?;
@@ -439,16 +571,15 @@ impl ReadWrite for Transaction {
impl TransactionTrait for Transaction {
fn kind(&self) -> TransactionKind<'_> {
match self {
Transaction::RemoveParticipant { participant, signed } => {
Transaction::RemoveParticipantDueToDkg { participant, signed } => {
TransactionKind::Signed((b"remove", participant.to_bytes()).encode(), signed)
}
Transaction::DkgParticipation { signed, .. } => {
TransactionKind::Signed(b"dkg".to_vec(), signed)
}
Transaction::DkgConfirmationNonces { attempt, signed, .. } |
Transaction::DkgConfirmationShare { attempt, signed, .. } => {
TransactionKind::Signed((b"dkg_confirmation", attempt).encode(), signed)
Transaction::DkgCommitments { attempt, commitments: _, signed } |
Transaction::DkgShares { attempt, signed, .. } |
Transaction::InvalidDkgShare { attempt, signed, .. } |
Transaction::DkgConfirmed { attempt, signed, .. } => {
TransactionKind::Signed((b"dkg", attempt).encode(), signed)
}
Transaction::CosignSubstrateBlock(_) => TransactionKind::Provided("cosign"),
@@ -515,14 +646,11 @@ impl Transaction {
fn signed(tx: &mut Transaction) -> (u32, &mut Signed) {
#[allow(clippy::match_same_arms)] // Doesn't make semantic sense here
let nonce = match tx {
Transaction::RemoveParticipant { .. } => 0,
Transaction::RemoveParticipantDueToDkg { .. } => 0,
Transaction::DkgParticipation { .. } => 0,
// Uses a nonce of 0 as it has an internal attempt counter we distinguish by
Transaction::DkgConfirmationNonces { .. } => 0,
// Uses a nonce of 1 due to internal attempt counter and due to following
// DkgConfirmationNonces
Transaction::DkgConfirmationShare { .. } => 1,
Transaction::DkgCommitments { .. } => 0,
Transaction::DkgShares { .. } => 1,
Transaction::InvalidDkgShare { .. } | Transaction::DkgConfirmed { .. } => 2,
Transaction::CosignSubstrateBlock(_) => panic!("signing CosignSubstrateBlock"),
@@ -541,10 +669,11 @@ impl Transaction {
nonce,
#[allow(clippy::match_same_arms)]
match tx {
Transaction::RemoveParticipant { ref mut signed, .. } |
Transaction::DkgParticipation { ref mut signed, .. } |
Transaction::DkgConfirmationNonces { ref mut signed, .. } => signed,
Transaction::DkgConfirmationShare { ref mut signed, .. } => signed,
Transaction::RemoveParticipantDueToDkg { ref mut signed, .. } |
Transaction::DkgCommitments { ref mut signed, .. } |
Transaction::DkgShares { ref mut signed, .. } |
Transaction::InvalidDkgShare { ref mut signed, .. } |
Transaction::DkgConfirmed { ref mut signed, .. } => signed,
Transaction::CosignSubstrateBlock(_) => panic!("signing CosignSubstrateBlock"),
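
As a worked annotation of the `DkgShares` wire format defined by the `write` arm above (our summary, not text from the source), a serialized transaction lays out as:

  [kind: 1 byte = 2]
  [attempt: u32, little-endian]
  [participant count n: u8]           (shares.len(), bounded by the 150-participant cap)
  [key shares per participant m: u8]  (shares[0].len())
  [share length: u16, little-endian]
  [n * m shares of that length, in participant-major order]
  [confirmation_nonces: 64 bytes]
  [Schnorr signature, its nonce omitted as it's implied by the transaction kind]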


@@ -27,7 +27,8 @@ rand_chacha = { version = "0.3", default-features = false, features = ["std"] }
blake2 = { version = "0.10", default-features = false, features = ["std"] }
transcript = { package = "flexible-transcript", path = "../../crypto/transcript", default-features = false, features = ["std", "recommended"] }
ciphersuite = { package = "ciphersuite", path = "../../crypto/ciphersuite", default-features = false, features = ["std", "ristretto"] }
dalek-ff-group = { path = "../../crypto/dalek-ff-group" }
ciphersuite = { package = "ciphersuite", path = "../../crypto/ciphersuite", default-features = false, features = ["std"] }
schnorr = { package = "schnorr-signatures", path = "../../crypto/schnorr", default-features = false, features = ["std"] }
hex = { version = "0.4", default-features = false, features = ["std"] }


@@ -1,6 +1,7 @@
use std::collections::{VecDeque, HashSet};
use ciphersuite::{group::GroupEncoding, Ciphersuite, Ristretto};
use dalek_ff_group::Ristretto;
use ciphersuite::{group::GroupEncoding, Ciphersuite};
use serai_db::{Get, DbTxn, Db};


@@ -5,7 +5,8 @@ use async_trait::async_trait;
use zeroize::Zeroizing;
use ciphersuite::{Ciphersuite, Ristretto};
use dalek_ff_group::Ristretto;
use ciphersuite::Ciphersuite;
use scale::Decode;
use futures_channel::mpsc::UnboundedReceiver;


@@ -1,6 +1,7 @@
use std::collections::HashMap;
use ciphersuite::{Ciphersuite, Ristretto};
use dalek_ff_group::Ristretto;
use ciphersuite::Ciphersuite;
use serai_db::{DbTxn, Db};


@@ -11,12 +11,13 @@ use rand_chacha::ChaCha12Rng;
use transcript::{Transcript, RecommendedTranscript};
use dalek_ff_group::Ristretto;
use ciphersuite::{
group::{
GroupEncoding,
ff::{Field, PrimeField},
},
Ciphersuite, Ristretto,
Ciphersuite,
};
use schnorr::{
SchnorrSignature,


@@ -4,7 +4,8 @@ use scale::{Encode, Decode, IoReader};
use blake2::{Digest, Blake2s256};
use ciphersuite::{Ciphersuite, Ristretto};
use dalek_ff_group::Ristretto;
use ciphersuite::Ciphersuite;
use crate::{
transaction::{Transaction, TransactionKind, TransactionError},


@@ -1,9 +1,11 @@
use std::{sync::Arc, io, collections::HashMap, fmt::Debug};
use blake2::{Digest, Blake2s256};
use dalek_ff_group::Ristretto;
use ciphersuite::{
group::{ff::Field, Group},
Ciphersuite, Ristretto,
Ciphersuite,
};
use schnorr::SchnorrSignature;


@@ -10,7 +10,8 @@ use rand::rngs::OsRng;
use blake2::{Digest, Blake2s256};
use ciphersuite::{group::ff::Field, Ciphersuite, Ristretto};
use dalek_ff_group::Ristretto;
use ciphersuite::{group::ff::Field, Ciphersuite};
use serai_db::{DbTxn, Db, MemDb};


@@ -3,7 +3,8 @@ use std::{sync::Arc, collections::HashMap};
use zeroize::Zeroizing;
use rand::{RngCore, rngs::OsRng};
use ciphersuite::{group::ff::Field, Ciphersuite, Ristretto};
use dalek_ff_group::Ristretto;
use ciphersuite::{group::ff::Field, Ciphersuite};
use tendermint::ext::Commit;


@@ -6,9 +6,10 @@ use rand::{RngCore, CryptoRng, rngs::OsRng};
use blake2::{Digest, Blake2s256};
use dalek_ff_group::Ristretto;
use ciphersuite::{
group::{ff::Field, Group},
Ciphersuite, Ristretto,
Ciphersuite,
};
use schnorr::SchnorrSignature;


@@ -2,7 +2,8 @@ use rand::rngs::OsRng;
use blake2::{Digest, Blake2s256};
use ciphersuite::{group::ff::Field, Ciphersuite, Ristretto};
use dalek_ff_group::Ristretto;
use ciphersuite::{group::ff::Field, Ciphersuite};
use crate::{
ReadWrite,


@@ -3,7 +3,8 @@ use std::sync::Arc;
use zeroize::Zeroizing;
use rand::{RngCore, rngs::OsRng};
use ciphersuite::{Ristretto, Ciphersuite, group::ff::Field};
use dalek_ff_group::Ristretto;
use ciphersuite::{Ciphersuite, group::ff::Field};
use scale::Encode;


@@ -6,9 +6,10 @@ use thiserror::Error;
use blake2::{Digest, Blake2b512};
use dalek_ff_group::Ristretto;
use ciphersuite::{
group::{Group, GroupEncoding},
Ciphersuite, Ristretto,
Ciphersuite,
};
use schnorr::SchnorrSignature;


@@ -25,7 +25,7 @@ parity-scale-codec = { version = "3", default-features = false, features = ["std
futures-util = { version = "0.3", default-features = false, features = ["std", "async-await-macro", "sink", "channel"] }
futures-channel = { version = "0.3", default-features = false, features = ["std", "sink"] }
tokio = { version = "1", default-features = false, features = ["time"] }
patchable-async-sleep = { version = "0.1", path = "../../../common/patchable-async-sleep", default-features = false }
serai-db = { path = "../../../common/db", version = "0.1", default-features = false }


@@ -1,3 +1,5 @@
#![expect(clippy::cast_possible_truncation)]
use core::fmt::Debug;
use std::{
@@ -13,7 +15,7 @@ use futures_util::{
FutureExt, StreamExt, SinkExt,
future::{self, Fuse},
};
use tokio::time::sleep;
use patchable_async_sleep::sleep;
use serai_db::{Get, DbTxn, Db};
@@ -708,9 +710,17 @@ impl<N: Network + 'static> TendermintMachine<N> {
if let Data::Proposal(_, block) = &msg.data {
match self.network.validate(block).await {
Ok(()) => {}
Err(BlockError::Temporal) => return Err(TendermintError::Temporal),
Err(BlockError::Temporal) => {
if self.block.round().step == Step::Propose {
self.broadcast(Data::Prevote(None));
}
Err(TendermintError::Temporal)?;
}
Err(BlockError::Fatal) => {
log::warn!(target: "tendermint", "validator proposed a fatally invalid block");
if self.block.round().step == Step::Propose {
self.broadcast(Data::Prevote(None));
}
self
.slash(
msg.sender,
@@ -729,6 +739,9 @@ impl<N: Network + 'static> TendermintMachine<N> {
target: "tendermint",
"proposed proposed with a syntactically invalid valid round",
);
if self.block.round().step == Step::Propose {
self.broadcast(Data::Prevote(None));
}
self
.slash(msg.sender, SlashEvent::WithEvidence(Evidence::InvalidValidRound(msg.encode())))
.await;
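
The three hunks above all add the same behavior; a condensed sketch (simplified types, not the machine's actual interface): a validator which rejects a proposal while still in the Propose step now casts a nil prevote rather than staying silent, so the round can still progress.

#[derive(PartialEq)]
enum Step { Propose, Prevote, Precommit }

// On any rejection of a proposal (temporal, fatal, or an invalid valid round),
// prevote nil if we have yet to move past the Propose step
fn on_rejected_proposal(step: &Step, mut broadcast_prevote: impl FnMut(Option<[u8; 32]>)) {
  if *step == Step::Propose {
    broadcast_prevote(None);
  }
}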


@@ -5,7 +5,7 @@ use std::{
};
use futures_util::{FutureExt, future};
use tokio::time::sleep;
use patchable_async_sleep::sleep;
use crate::{
time::CanonicalInstant,


@@ -1,13 +1,13 @@
[package]
name = "ciphersuite"
version = "0.4.1"
version = "0.4.2"
description = "Ciphersuites built around ff/group"
license = "MIT"
repository = "https://github.com/serai-dex/serai/tree/develop/crypto/ciphersuite"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = ["ciphersuite", "ff", "group"]
edition = "2021"
rust-version = "1.74"
rust-version = "1.66"
[package.metadata.docs.rs]
all-features = true
@@ -24,22 +24,12 @@ rand_core = { version = "0.6", default-features = false }
zeroize = { version = "^1.5", default-features = false, features = ["derive"] }
subtle = { version = "^2.4", default-features = false }
digest = { version = "0.10", default-features = false }
digest = { version = "0.10", default-features = false, features = ["core-api"] }
transcript = { package = "flexible-transcript", path = "../transcript", version = "^0.3.2", default-features = false }
sha2 = { version = "0.10", default-features = false, optional = true }
sha3 = { version = "0.10", default-features = false, optional = true }
ff = { version = "0.13", default-features = false, features = ["bits"] }
group = { version = "0.13", default-features = false }
dalek-ff-group = { path = "../dalek-ff-group", version = "0.4", default-features = false, optional = true }
elliptic-curve = { version = "0.13", default-features = false, features = ["hash2curve"], optional = true }
p256 = { version = "^0.13.1", default-features = false, features = ["arithmetic", "bits", "hash2curve"], optional = true }
k256 = { version = "^0.13.1", default-features = false, features = ["arithmetic", "bits", "hash2curve"], optional = true }
minimal-ed448 = { path = "../ed448", version = "0.4", default-features = false, optional = true }
[dev-dependencies]
hex = { version = "0.4", default-features = false, features = ["std"] }
@@ -48,7 +38,7 @@ rand_core = { version = "0.6", default-features = false, features = ["std"] }
ff-group-tests = { version = "0.13", path = "../ff-group-tests" }
[features]
alloc = ["std-shims"]
alloc = ["std-shims", "ff/alloc"]
std = [
"std-shims/std",
@@ -59,27 +49,8 @@ std = [
"digest/std",
"transcript/std",
"sha2?/std",
"sha3?/std",
"ff/std",
"dalek-ff-group?/std",
"elliptic-curve?/std",
"p256?/std",
"k256?/std",
"minimal-ed448?/std",
]
dalek = ["sha2", "dalek-ff-group"]
ed25519 = ["dalek"]
ristretto = ["dalek"]
kp256 = ["sha2", "elliptic-curve"]
p256 = ["kp256", "dep:p256"]
secp256k1 = ["kp256", "k256"]
ed448 = ["sha3", "minimal-ed448"]
default = ["std"]


@@ -21,6 +21,8 @@ Their `hash_to_F` is the
[IETF's hash to curve](https://www.ietf.org/archive/id/draft-irtf-cfrg-hash-to-curve-16.html),
yet applied to their scalar field.
Please see the [`ciphersuite-kp256`](https://docs.rs/ciphersuite-kp256) crate for more info.
### Ed25519/Ristretto
Ed25519/Ristretto are offered via
@@ -33,6 +35,8 @@ the draft
[RFC-RISTRETTO](https://www.ietf.org/archive/id/draft-irtf-cfrg-ristretto255-decaf448-05.html).
The domain-separation tag is naively prefixed to the message.
Please see the [`dalek-ff-group`](https://docs.rs/dalek-ff-group) crate for more info.
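
A minimal sketch of the transposition caveat above (assuming the post-split layout, with `Ristretto` now exported by `dalek-ff-group`): because the tag is naively prefixed, moving bytes between the tag and the message hashes the same stream.

use ciphersuite::Ciphersuite;
use dalek_ff_group::Ristretto;

fn main() {
  // Both calls hash the identical byte stream "abcdef"
  let a = <Ristretto as Ciphersuite>::hash_to_F(b"abc", b"def");
  let b = <Ristretto as Ciphersuite>::hash_to_F(b"abcdef", b"");
  assert_eq!(a, b);
}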
### Ed448
Ed448 is offered via [minimal-ed448](https://crates.io/crates/minimal-ed448), an
@@ -42,3 +46,5 @@ to its prime-order subgroup.
Its `hash_to_F` is the wide reduction of SHAKE256, with a 114-byte output, as
used in [RFC-8032](https://www.rfc-editor.org/rfc/rfc8032). The
domain-separation tag is naively prefixed to the message.
Please see the [`minimal-ed448`](https://docs.rs/minimal-ed448) crate for more info.


@@ -0,0 +1,55 @@
[package]
name = "ciphersuite-kp256"
version = "0.4.0"
description = "Ciphersuites built around ff/group"
license = "MIT"
repository = "https://github.com/serai-dex/serai/tree/develop/crypto/ciphersuite/kp256"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = ["ciphersuite", "ff", "group"]
edition = "2021"
rust-version = "1.66"
[package.metadata.docs.rs]
all-features = true
rustdoc-args = ["--cfg", "docsrs"]
[lints]
workspace = true
[dependencies]
rand_core = { version = "0.6", default-features = false }
zeroize = { version = "^1.5", default-features = false, features = ["derive"] }
sha2 = { version = "0.10", default-features = false }
elliptic-curve = { version = "0.13", default-features = false, features = ["hash2curve"] }
p256 = { version = "^0.13.1", default-features = false, features = ["arithmetic", "bits", "hash2curve"] }
k256 = { version = "^0.13.1", default-features = false, features = ["arithmetic", "bits", "hash2curve"] }
ciphersuite = { path = "../", version = "0.4", default-features = false }
[dev-dependencies]
hex = { version = "0.4", default-features = false, features = ["std"] }
rand_core = { version = "0.6", default-features = false, features = ["std"] }
ff-group-tests = { version = "0.13", path = "../../ff-group-tests" }
[features]
alloc = ["ciphersuite/alloc"]
std = [
"rand_core/std",
"zeroize/std",
"sha2/std",
"elliptic-curve/std",
"p256/std",
"k256/std",
"ciphersuite/std",
]
default = ["std"]


@@ -1,6 +1,6 @@
MIT License
Copyright (c) 2023-2024 Luke Parker
Copyright (c) 2021-2023 Luke Parker
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal


@@ -0,0 +1,3 @@
# Ciphersuite {k, p}256
The SECP256k1 and P-256 ciphersuites, built around k256 and p256.


@@ -1,16 +1,17 @@
#![cfg_attr(docsrs, feature(doc_cfg))]
#![cfg_attr(not(feature = "std"), no_std)]
use zeroize::Zeroize;
use sha2::Sha256;
use group::ff::PrimeField;
use elliptic_curve::{
generic_array::GenericArray,
bigint::{NonZero, CheckedAdd, Encoding, U384},
hash2curve::{Expander, ExpandMsg, ExpandMsgXmd},
};
use crate::Ciphersuite;
use ciphersuite::{group::ff::PrimeField, Ciphersuite};
macro_rules! kp_curve {
(
@@ -107,12 +108,9 @@ fn test_oversize_dst<C: Ciphersuite>() {
/// Ciphersuite for Secp256k1.
///
/// hash_to_F is implemented via the IETF draft for hash to curve's hash_to_field (v16).
#[cfg(feature = "secp256k1")]
#[derive(Clone, Copy, PartialEq, Eq, Debug, Zeroize)]
pub struct Secp256k1;
#[cfg(feature = "secp256k1")]
kp_curve!("secp256k1", k256, Secp256k1, b"secp256k1");
#[cfg(feature = "secp256k1")]
#[test]
fn test_secp256k1() {
ff_group_tests::group::test_prime_group_bits::<_, k256::ProjectivePoint>(&mut rand_core::OsRng);
@@ -145,12 +143,9 @@ fn test_secp256k1() {
/// Ciphersuite for P-256.
///
/// hash_to_F is implemented via the IETF draft for hash to curve's hash_to_field (v16).
#[cfg(feature = "p256")]
#[derive(Clone, Copy, PartialEq, Eq, Debug, Zeroize)]
pub struct P256;
#[cfg(feature = "p256")]
kp_curve!("p256", p256, P256, b"P-256");
#[cfg(feature = "p256")]
#[test]
fn test_p256() {
ff_group_tests::group::test_prime_group_bits::<_, p256::ProjectivePoint>(&mut rand_core::OsRng);
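
A hedged usage sketch of the extracted crate (assuming it's imported as `ciphersuite_kp256` and re-exports the suites as shown above):

use ciphersuite::Ciphersuite;
use ciphersuite_kp256::Secp256k1;

fn main() {
  // hash_to_F per the IETF hash-to-curve draft's hash_to_field, over the scalar field
  let scalar = Secp256k1::hash_to_F(b"example-dst", b"message");
  println!("{:?}", scalar);
}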


@@ -2,7 +2,7 @@
Ciphersuites for elliptic curves premised on ff/group.
This library, except for the not recommended Ed448 ciphersuite, was
This library was
[audited by Cypher Stack in March 2023](https://github.com/serai-dex/serai/raw/e1bb2c191b7123fd260d008e31656d090d559d21/audits/Cypher%20Stack%20crypto%20March%202023/Audit.pdf),
culminating in commit
[669d2dbffc1dafb82a09d9419ea182667115df06](https://github.com/serai-dex/serai/tree/669d2dbffc1dafb82a09d9419ea182667115df06).


@@ -1,9 +1,12 @@
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
#![cfg_attr(docsrs, feature(doc_cfg))]
#![doc = include_str!("lib.md")]
#![cfg_attr(not(feature = "std"), no_std)]
use core::fmt::Debug;
#[cfg(any(feature = "alloc", feature = "std"))]
#[allow(unused_imports)]
use std_shims::prelude::*;
#[cfg(any(feature = "alloc", feature = "std"))]
use std_shims::io::{self, Read};
use rand_core::{RngCore, CryptoRng};
@@ -23,25 +26,6 @@ use group::{
#[cfg(any(feature = "alloc", feature = "std"))]
use group::GroupEncoding;
#[cfg(feature = "dalek")]
mod dalek;
#[cfg(feature = "ristretto")]
pub use dalek::Ristretto;
#[cfg(feature = "ed25519")]
pub use dalek::Ed25519;
#[cfg(feature = "kp256")]
mod kp256;
#[cfg(feature = "secp256k1")]
pub use kp256::Secp256k1;
#[cfg(feature = "p256")]
pub use kp256::P256;
#[cfg(feature = "ed448")]
mod ed448;
#[cfg(feature = "ed448")]
pub use ed448::*;
/// Unified trait defining a ciphersuite around an elliptic curve.
pub trait Ciphersuite:
'static + Send + Sync + Clone + Copy + PartialEq + Eq + Debug + Zeroize
@@ -99,6 +83,9 @@ pub trait Ciphersuite:
}
/// Read a canonical point from something implementing std::io::Read.
///
/// The provided implementation is safe so long as `GroupEncoding::to_bytes` always returns a
/// canonical serialization.
#[cfg(any(feature = "alloc", feature = "std"))]
#[allow(non_snake_case)]
fn read_G<R: Read>(reader: &mut R) -> io::Result<Self::G> {
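
Where the diff cuts off, a sketch (assumed shape, not the crate's exact body) of the canonicity check the new doc comment describes: deserialize the point, then require the round-trip serialization to match byte-for-byte.

use std::io::{self, Read};
use group::GroupEncoding;

fn read_point<G: GroupEncoding, R: Read>(reader: &mut R) -> io::Result<G> {
  let mut repr = G::Repr::default();
  reader.read_exact(repr.as_mut())?;
  let point = Option::<G>::from(G::from_bytes(&repr))
    .ok_or_else(|| io::Error::other("invalid point"))?;
  // Only canonical if GroupEncoding::to_bytes round-trips to the bytes read
  if point.to_bytes().as_ref() != repr.as_ref() {
    Err(io::Error::other("non-canonical point"))?;
  }
  Ok(point)
}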


@@ -1,13 +1,13 @@
[package]
name = "dalek-ff-group"
version = "0.4.1"
version = "0.4.4"
description = "ff/group bindings around curve25519-dalek"
license = "MIT"
repository = "https://github.com/serai-dex/serai/tree/develop/crypto/dalek-ff-group"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = ["curve25519", "ed25519", "ristretto", "dalek", "group"]
edition = "2021"
rust-version = "1.66"
rust-version = "1.65"
[package.metadata.docs.rs]
all-features = true
@@ -25,18 +25,22 @@ subtle = { version = "^2.4", default-features = false }
rand_core = { version = "0.6", default-features = false }
digest = { version = "0.10", default-features = false }
sha2 = { version = "0.10", default-features = false }
ff = { version = "0.13", default-features = false, features = ["bits"] }
group = { version = "0.13", default-features = false }
ciphersuite = { path = "../ciphersuite", default-features = false }
crypto-bigint = { version = "0.5", default-features = false, features = ["zeroize"] }
curve25519-dalek = { version = ">= 4.0, < 4.2", default-features = false, features = ["alloc", "zeroize", "digest", "group", "precomputed-tables"] }
[dev-dependencies]
hex = "0.4"
rand_core = { version = "0.6", default-features = false, features = ["std"] }
ff-group-tests = { path = "../ff-group-tests" }
[features]
std = ["zeroize/std", "subtle/std", "rand_core/std", "digest/std"]
alloc = ["zeroize/alloc", "ciphersuite/alloc"]
std = ["alloc", "zeroize/std", "subtle/std", "rand_core/std", "digest/std", "sha2/std", "ciphersuite/std"]
default = ["std"]


@@ -3,9 +3,9 @@ use zeroize::Zeroize;
use sha2::{Digest, Sha512};
use group::Group;
use dalek_ff_group::Scalar;
use crate::Scalar;
use crate::Ciphersuite;
use ciphersuite::Ciphersuite;
macro_rules! dalek_curve {
(
@@ -15,7 +15,7 @@ macro_rules! dalek_curve {
$Point: ident,
$ID: literal
) => {
use dalek_ff_group::$Point;
use crate::$Point;
impl Ciphersuite for $Ciphersuite {
type F = Scalar;
@@ -40,12 +40,9 @@ macro_rules! dalek_curve {
/// hash_to_F is implemented with a naive concatenation of the dst and data, allowing transposition
/// between the two. This means `dst: b"abc", data: b"def"` will produce the same scalar as
/// `dst: "abcdef", data: b""`. Please use carefully, not letting dsts be substrings of each other.
#[cfg(any(test, feature = "ristretto"))]
#[derive(Clone, Copy, PartialEq, Eq, Debug, Zeroize)]
pub struct Ristretto;
#[cfg(any(test, feature = "ristretto"))]
dalek_curve!("ristretto", Ristretto, RistrettoPoint, b"ristretto");
#[cfg(any(test, feature = "ristretto"))]
#[test]
fn test_ristretto() {
ff_group_tests::group::test_prime_group_bits::<_, RistrettoPoint>(&mut rand_core::OsRng);
@@ -71,12 +68,9 @@ fn test_ristretto() {
/// hash_to_F is implemented with a naive concatenation of the dst and data, allowing transposition
/// between the two. This means `dst: b"abc", data: b"def"` will produce the same scalar as
/// `dst: "abcdef", data: b""`. Please use carefully, not letting dsts be substrings of each other.
#[cfg(feature = "ed25519")]
#[derive(Clone, Copy, PartialEq, Eq, Debug, Zeroize)]
pub struct Ed25519;
#[cfg(feature = "ed25519")]
dalek_curve!("ed25519", Ed25519, EdwardsPoint, b"edwards25519");
#[cfg(feature = "ed25519")]
#[test]
fn test_ed25519() {
ff_group_tests::group::test_prime_group_bits::<_, EdwardsPoint>(&mut rand_core::OsRng);


@@ -17,7 +17,7 @@ use crypto_bigint::{
impl_modulus,
};
use group::ff::{Field, PrimeField, FieldBits, PrimeFieldBits};
use group::ff::{Field, PrimeField, FieldBits, PrimeFieldBits, FromUniformBytes};
use crate::{u8_from_bool, constant_time, math_op, math};
@@ -35,7 +35,8 @@ impl_modulus!(
type ResidueType = Residue<FieldModulus, { FieldModulus::LIMBS }>;
/// A constant-time implementation of the Ed25519 field.
#[derive(Clone, Copy, PartialEq, Eq, Default, Debug)]
#[derive(Clone, Copy, PartialEq, Eq, Default, Debug, Zeroize)]
#[repr(transparent)]
pub struct FieldElement(ResidueType);
// Square root of -1.
@@ -92,7 +93,7 @@ impl Neg for FieldElement {
}
}
impl<'a> Neg for &'a FieldElement {
impl Neg for &FieldElement {
type Output = FieldElement;
fn neg(self) -> Self::Output {
(*self).neg()
@@ -216,10 +217,18 @@ impl PrimeFieldBits for FieldElement {
}
impl FieldElement {
/// Interpret the value as a little-endian integer, square it, and reduce it into a FieldElement.
pub fn from_square(value: [u8; 32]) -> FieldElement {
let value = U256::from_le_bytes(value);
FieldElement(reduce(U512::from(value.mul_wide(&value))))
/// Create a FieldElement from a `crypto_bigint::U256`.
///
/// This will reduce the `U256` by the modulus into a member of the field.
pub const fn from_u256(u256: &U256) -> Self {
FieldElement(Residue::new(u256))
}
/// Create a `FieldElement` from the reduction of a 512-bit number.
///
/// The bytes are interpreted in little-endian format.
pub fn wide_reduce(value: [u8; 64]) -> Self {
FieldElement(reduce(U512::from_le_bytes(value)))
}
/// Perform an exponentiation.
@@ -244,7 +253,16 @@ impl FieldElement {
res *= res;
}
}
res *= table[usize::from(bits)];
let mut scale_by = FieldElement::ONE;
#[allow(clippy::needless_range_loop)]
for i in 0 .. 16 {
#[allow(clippy::cast_possible_truncation)] // Safe since 0 .. 16
{
scale_by = <_>::conditional_select(&scale_by, &table[i], bits.ct_eq(&(i as u8)));
}
}
res *= scale_by;
bits = 0;
}
}
@@ -288,6 +306,12 @@ impl FieldElement {
}
}
impl FromUniformBytes<64> for FieldElement {
fn from_uniform_bytes(bytes: &[u8; 64]) -> Self {
Self::wide_reduce(*bytes)
}
}
impl Sum<FieldElement> for FieldElement {
fn sum<I: Iterator<Item = FieldElement>>(iter: I) -> FieldElement {
let mut res = FieldElement::ZERO;
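
A brief sketch tying together the two additions above (our example, using the names from the diff): `FromUniformBytes` is implemented in terms of the new `wide_reduce`, so both entry points reduce the same 512-bit little-endian integer into the field.

use dalek_ff_group::FieldElement;
use group::ff::FromUniformBytes;

fn main() {
  let wide = [0xff; 64];
  assert_eq!(FieldElement::wide_reduce(wide), FieldElement::from_uniform_bytes(&wide));
}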


@@ -1,5 +1,5 @@
#![allow(deprecated)]
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
#![cfg_attr(docsrs, feature(doc_cfg))]
#![no_std] // Prevents writing new code, in what should be a simple wrapper, which requires std
#![doc = include_str!("../README.md")]
#![allow(clippy::redundant_closure_call)]
@@ -30,7 +30,7 @@ use dalek::{
pub use constants::{ED25519_BASEPOINT_TABLE, RISTRETTO_BASEPOINT_TABLE};
use group::{
ff::{Field, PrimeField, FieldBits, PrimeFieldBits},
ff::{Field, PrimeField, FieldBits, PrimeFieldBits, FromUniformBytes},
Group, GroupEncoding,
prime::PrimeGroup,
};
@@ -38,13 +38,24 @@ use group::{
mod field;
pub use field::FieldElement;
mod ciphersuite;
pub use crate::ciphersuite::{Ed25519, Ristretto};
// Use black_box when possible
#[rustversion::since(1.66)]
use core::hint::black_box;
#[rustversion::before(1.66)]
fn black_box<T>(val: T) -> T {
val
mod black_box {
pub(crate) fn black_box<T>(val: T) -> T {
#[allow(clippy::incompatible_msrv)]
core::hint::black_box(val)
}
}
#[rustversion::before(1.66)]
mod black_box {
pub(crate) fn black_box<T>(val: T) -> T {
val
}
}
use black_box::black_box;
fn u8_from_bool(bit_ref: &mut bool) -> u8 {
let bit_ref = black_box(bit_ref);
@@ -208,7 +219,16 @@ impl Scalar {
res *= res;
}
}
res *= table[usize::from(bits)];
let mut scale_by = Scalar::ONE;
#[allow(clippy::needless_range_loop)]
for i in 0 .. 16 {
#[allow(clippy::cast_possible_truncation)] // Safe since 0 .. 16
{
scale_by = <_>::conditional_select(&scale_by, &table[i], bits.ct_eq(&(i as u8)));
}
}
res *= scale_by;
bits = 0;
}
}
@@ -305,6 +325,12 @@ impl PrimeFieldBits for Scalar {
}
}
impl FromUniformBytes<64> for Scalar {
fn from_uniform_bytes(bytes: &[u8; 64]) -> Self {
Self::from_bytes_mod_order_wide(bytes)
}
}
impl Sum<Scalar> for Scalar {
fn sum<I: Iterator<Item = Scalar>>(iter: I) -> Scalar {
Self(DScalar::sum(iter))
@@ -342,7 +368,12 @@ macro_rules! dalek_group {
$BASEPOINT_POINT: ident,
$BASEPOINT_TABLE: ident
) => {
/// Wrapper around the dalek Point type. For Ed25519, this is restricted to the prime subgroup.
/// Wrapper around the dalek Point type.
///
/// All operations will be restricted to a prime-order subgroup (equivalent to the group itself
/// in the case of Ristretto). The exposure of the internal element does allow bypassing this
/// however, which may lead to undefined/computationally-unsafe behavior, and is entirely at
/// the user's risk.
#[derive(Clone, Copy, PartialEq, Eq, Debug, Zeroize)]
pub struct $Point(pub $DPoint);
deref_borrow!($Point, $DPoint);
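
Both `pow` routines in this file replace a secret-indexed `table[usize::from(bits)]` access with a full scan; a generic sketch (ours, not the crate's code) of that constant-time selection pattern:

use subtle::{ConditionallySelectable, ConstantTimeEq};

// Scan every slot and select with a constant-time flag, so the memory access
// pattern doesn't depend on the secret `index`
fn ct_lookup<T: ConditionallySelectable>(table: &[T; 16], index: u8) -> T {
  let mut res = table[0];
  for i in 1 .. 16 {
    res = T::conditional_select(&res, &table[i], index.ct_eq(&(i as u8)));
  }
  res
}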


@@ -1,13 +1,13 @@
[package]
name = "dkg"
version = "0.5.1"
version = "0.6.1"
description = "Distributed key generation over ff/group"
license = "MIT"
repository = "https://github.com/serai-dex/serai/tree/develop/crypto/dkg"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = ["dkg", "multisig", "threshold", "ff", "group"]
edition = "2021"
rust-version = "1.79"
rust-version = "1.66"
[package.metadata.docs.rs]
all-features = true
@@ -17,84 +17,25 @@ rustdoc-args = ["--cfg", "docsrs"]
workspace = true
[dependencies]
thiserror = { version = "1", default-features = false, optional = true }
zeroize = { version = "^1.5", default-features = false, features = ["zeroize_derive", "alloc"] }
rand_core = { version = "0.6", default-features = false }
zeroize = { version = "^1.5", default-features = false, features = ["zeroize_derive"] }
thiserror = { version = "2", default-features = false }
std-shims = { version = "0.1", path = "../../common/std-shims", default-features = false }
borsh = { version = "1", default-features = false, features = ["derive", "de_strict_order"], optional = true }
transcript = { package = "flexible-transcript", path = "../transcript", version = "^0.3.2", default-features = false, features = ["recommended"] }
chacha20 = { version = "0.9", default-features = false, features = ["zeroize"] }
ciphersuite = { path = "../ciphersuite", version = "^0.4.1", default-features = false }
multiexp = { path = "../multiexp", version = "0.4", default-features = false }
schnorr = { package = "schnorr-signatures", path = "../schnorr", version = "^0.5.1", default-features = false }
dleq = { path = "../dleq", version = "^0.4.1", default-features = false }
# eVRF DKG dependencies
subtle = { version = "2", default-features = false, features = ["std"], optional = true }
generic-array = { version = "1", default-features = false, features = ["alloc"], optional = true }
blake2 = { version = "0.10", default-features = false, features = ["std"], optional = true }
rand_chacha = { version = "0.3", default-features = false, features = ["std"], optional = true }
generalized-bulletproofs = { path = "../evrf/generalized-bulletproofs", default-features = false, optional = true }
ec-divisors = { path = "../evrf/divisors", default-features = false, optional = true }
generalized-bulletproofs-circuit-abstraction = { path = "../evrf/circuit-abstraction", optional = true }
generalized-bulletproofs-ec-gadgets = { path = "../evrf/ec-gadgets", optional = true }
secq256k1 = { path = "../evrf/secq256k1", optional = true }
embedwards25519 = { path = "../evrf/embedwards25519", optional = true }
[dev-dependencies]
rand_core = { version = "0.6", default-features = false, features = ["getrandom"] }
rand = { version = "0.8", default-features = false, features = ["std"] }
ciphersuite = { path = "../ciphersuite", default-features = false, features = ["ristretto"] }
generalized-bulletproofs = { path = "../evrf/generalized-bulletproofs", features = ["tests"] }
ec-divisors = { path = "../evrf/divisors", features = ["pasta"] }
pasta_curves = "0.5"
ciphersuite = { path = "../ciphersuite", version = "^0.4.1", default-features = false, features = ["alloc"] }
[features]
std = [
"thiserror",
"rand_core/std",
"thiserror/std",
"std-shims/std",
"borsh?/std",
"transcript/std",
"chacha20/std",
"ciphersuite/std",
"multiexp/std",
"multiexp/batch",
"schnorr/std",
"dleq/std",
"dleq/serialize"
]
borsh = ["dep:borsh"]
evrf = [
"std",
"dep:subtle",
"dep:generic-array",
"dep:blake2",
"dep:rand_chacha",
"dep:generalized-bulletproofs",
"dep:ec-divisors",
"dep:generalized-bulletproofs-circuit-abstraction",
"dep:generalized-bulletproofs-ec-gadgets",
]
evrf-secp256k1 = ["evrf", "ciphersuite/secp256k1", "secq256k1"]
evrf-ed25519 = ["evrf", "ciphersuite/ed25519", "embedwards25519"]
evrf-ristretto = ["evrf", "ciphersuite/ristretto", "embedwards25519"]
tests = ["rand_core/getrandom"]
default = ["std"]


@@ -1,6 +1,6 @@
MIT License
Copyright (c) 2021-2023 Luke Parker
Copyright (c) 2021-2025 Luke Parker
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal


@@ -1,16 +1,15 @@
# Distributed Key Generation
A collection of implementations of various distributed key generation protocols.
A crate implementing a type for keys, presumably the result of a distributed
key generation protocol, along with utilities for working with them.
All included protocols resolve into the provided `Threshold` types, intended to
enable their modularity. Additional utilities around these types, such as
promotion from one generator to another, are also provided.
This crate used to host implementations of distributed key generation protocols
as well (hence the name). Those have been smashed into their own crates, such
as [`dkg-musig`](https://docs.rs/dkg-musig) and
[`dkg-pedpop`](https://docs.rs/dkg-pedpop).
Currently, the only included protocol is the two-round protocol from the
[FROST paper](https://eprint.iacr.org/2020/852).
This library was
[audited by Cypher Stack in March 2023](https://github.com/serai-dex/serai/raw/e1bb2c191b7123fd260d008e31656d090d559d21/audits/Cypher%20Stack%20crypto%20March%202023/Audit.pdf),
culminating in commit
[669d2dbffc1dafb82a09d9419ea182667115df06](https://github.com/serai-dex/serai/tree/669d2dbffc1dafb82a09d9419ea182667115df06).
Any subsequent changes have not undergone auditing.
Before being smashed, this crate was [audited by Cypher Stack in March 2023](
https://github.com/serai-dex/serai/raw/e1bb2c191b7123fd260d008e31656d090d559d21/audits/Cypher%20Stack%20crypto%20March%202023/Audit.pdf
), culminating in commit [669d2dbffc1dafb82a09d9419ea182667115df06](
https://github.com/serai-dex/serai/tree/669d2dbffc1dafb82a09d9419ea182667115df06
). Any subsequent changes have not undergone auditing.


@@ -0,0 +1,36 @@
[package]
name = "dkg-dealer"
version = "0.6.0"
description = "Produce dkg::ThresholdKeys with a dealer key generation"
license = "MIT"
repository = "https://github.com/serai-dex/serai/tree/develop/crypto/dkg/dealer"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = ["dkg", "multisig", "threshold", "ff", "group"]
edition = "2021"
rust-version = "1.66"
[package.metadata.docs.rs]
all-features = true
rustdoc-args = ["--cfg", "docsrs"]
[lints]
workspace = true
[dependencies]
zeroize = { version = "^1.5", default-features = false }
rand_core = { version = "0.6", default-features = false }
std-shims = { version = "0.1", path = "../../../common/std-shims", default-features = false }
ciphersuite = { path = "../../ciphersuite", version = "^0.4.1", default-features = false }
dkg = { path = "../", version = "0.6", default-features = false }
[features]
std = [
"zeroize/std",
"rand_core/std",
"std-shims/std",
"ciphersuite/std",
"dkg/std",
]
default = ["std"]


@@ -1,6 +1,6 @@
MIT License
Copyright (c) 2022-2024 Luke Parker
Copyright (c) 2021-2025 Luke Parker
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal


@@ -0,0 +1,13 @@
# Distributed Key Generation - Dealer
This crate implements a dealer key generation protocol for the
[`dkg`](https://docs.rs/dkg) crate's types. This provides a single point of
failure when the key is being generated and is NOT recommended for use outside
of tests.
This crate was originally part of (in some form) the `dkg` crate, which was
[audited by Cypher Stack in March 2023](
https://github.com/serai-dex/serai/raw/e1bb2c191b7123fd260d008e31656d090d559d21/audits/Cypher%20Stack%20crypto%20March%202023/Audit.pdf
), culminating in commit [669d2dbffc1dafb82a09d9419ea182667115df06](
https://github.com/serai-dex/serai/tree/669d2dbffc1dafb82a09d9419ea182667115df06
). Any subsequent changes have not undergone auditing.


@@ -0,0 +1,68 @@
#![cfg_attr(docsrs, feature(doc_cfg))]
#![doc = include_str!("../README.md")]
#![no_std]
use core::ops::Deref;
use std_shims::{vec::Vec, collections::HashMap};
use zeroize::{Zeroize, Zeroizing};
use rand_core::{RngCore, CryptoRng};
use ciphersuite::{
group::ff::{Field, PrimeField},
Ciphersuite,
};
pub use dkg::*;
/// Create a key via a dealer key generation protocol.
pub fn key_gen<R: RngCore + CryptoRng, C: Ciphersuite>(
rng: &mut R,
threshold: u16,
participants: u16,
) -> Result<HashMap<Participant, ThresholdKeys<C>>, DkgError> {
let mut coefficients = Vec::with_capacity(usize::from(participants));
// `.max(1)` so we always generate the 0th coefficient which we'll share
for _ in 0 .. threshold.max(1) {
coefficients.push(Zeroizing::new(C::F::random(&mut *rng)));
}
fn polynomial<F: PrimeField + Zeroize>(
coefficients: &[Zeroizing<F>],
l: Participant,
) -> Zeroizing<F> {
let l = F::from(u64::from(u16::from(l)));
// This should never be reached since Participant is explicitly non-zero
assert!(l != F::ZERO, "zero participant passed to polynomial");
let mut share = Zeroizing::new(F::ZERO);
for (idx, coefficient) in coefficients.iter().rev().enumerate() {
*share += coefficient.deref();
if idx != (coefficients.len() - 1) {
*share *= l;
}
}
share
}
let group_key = C::generator() * coefficients[0].deref();
let mut secret_shares = HashMap::with_capacity(participants as usize);
let mut verification_shares = HashMap::with_capacity(participants as usize);
for i in 1 ..= participants {
let i = Participant::new(i).expect("non-zero u16 wasn't a valid Participant index");
let secret_share = polynomial(&coefficients, i);
secret_shares.insert(i, secret_share.clone());
verification_shares.insert(i, C::generator() * *secret_share);
}
let mut res = HashMap::with_capacity(participants as usize);
for (i, secret_share) in secret_shares {
let keys = ThresholdKeys::new(
ThresholdParams::new(threshold, participants, i)?,
Interpolation::Lagrange,
secret_share,
verification_shares.clone(),
)?;
debug_assert_eq!(keys.group_key(), group_key);
res.insert(i, keys);
}
Ok(res)
}
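
A hedged usage sketch of `key_gen` above (assuming the crate is imported as `dkg_dealer` and rand_core's getrandom-backed `OsRng` is available):

use rand_core::OsRng;
use dalek_ff_group::Ristretto;
use dkg_dealer::key_gen;

fn main() {
  // Deal 3-of-5 Ristretto keys from a single, fully-trusted dealer
  let keys = key_gen::<_, Ristretto>(&mut OsRng, 3, 5).unwrap();
  assert_eq!(keys.len(), 5);
}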


@@ -0,0 +1,49 @@
[package]
name = "dkg-musig"
version = "0.6.0"
description = "The MuSig key aggregation protocol"
license = "MIT"
repository = "https://github.com/serai-dex/serai/tree/develop/crypto/dkg/musig"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = ["dkg", "multisig", "threshold", "ff", "group"]
edition = "2021"
rust-version = "1.79"
[package.metadata.docs.rs]
all-features = true
rustdoc-args = ["--cfg", "docsrs"]
[lints]
workspace = true
[dependencies]
thiserror = { version = "2", default-features = false }
rand_core = { version = "0.6", default-features = false }
zeroize = { version = "^1.5", default-features = false, features = ["zeroize_derive"] }
std-shims = { version = "0.1", path = "../../../common/std-shims", default-features = false }
multiexp = { path = "../../multiexp", version = "0.4", default-features = false }
ciphersuite = { path = "../../ciphersuite", version = "^0.4.1", default-features = false }
dkg = { path = "../", version = "0.6", default-features = false }
[dev-dependencies]
rand_core = { version = "0.6", default-features = false, features = ["getrandom"] }
dalek-ff-group = { path = "../../dalek-ff-group" }
dkg-recovery = { path = "../recovery", default-features = false, features = ["std"] }
[features]
std = [
"thiserror/std",
"rand_core/std",
"std-shims/std",
"multiexp/std",
"ciphersuite/std",
"dkg/std",
]
default = ["std"]


@@ -1,6 +1,6 @@
MIT License
Copyright (c) 2021-2024 Luke Parker
Copyright (c) 2021-2025 Luke Parker
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal


@@ -0,0 +1,12 @@
# Distributed Key Generation - MuSig
This implements the MuSig key aggregation protocol for the
[`dkg`](https://docs.rs/dkg) crate's types.
This crate was originally part of (in some form) the `dkg` crate, which was
[audited by Cypher Stack in March 2023](
https://github.com/serai-dex/serai/raw/e1bb2c191b7123fd260d008e31656d090d559d21/audits/Cypher%20Stack%20crypto%20March%202023/Audit.pdf
), culminating in commit
[669d2dbffc1dafb82a09d9419ea182667115df06](
https://github.com/serai-dex/serai/tree/669d2dbffc1dafb82a09d9419ea182667115df06
). Any subsequent changes have not undergone auditing.

crypto/dkg/musig/src/lib.rs

@@ -0,0 +1,162 @@
#![cfg_attr(docsrs, feature(doc_cfg))]
#![doc = include_str!("../README.md")]
#![cfg_attr(not(feature = "std"), no_std)]
use core::ops::Deref;
use std_shims::{
vec,
vec::Vec,
collections::{HashSet, HashMap},
};
use zeroize::Zeroizing;
use ciphersuite::{group::GroupEncoding, Ciphersuite};
pub use dkg::*;
#[cfg(test)]
mod tests;
/// Errors encountered when working with threshold keys.
#[derive(Clone, PartialEq, Eq, Debug, thiserror::Error)]
pub enum MusigError<C: Ciphersuite> {
/// No keys were provided.
#[error("no keys provided")]
NoKeysProvided,
/// Too many keys were provided.
#[error("too many keys (allowed {max}, provided {provided})")]
TooManyKeysProvided {
/// The maximum amount of keys allowed.
max: u16,
/// The amount of keys provided.
provided: usize,
},
/// A participant was duplicated.
#[error("a participant was duplicated")]
DuplicatedParticipant(C::G),
/// Participating, yet our public key wasn't found in the list of keys.
#[error("private key's public key wasn't present in the list of public keys")]
NotPresent,
/// An error propagated from the underlying `dkg` crate.
#[error("error from dkg ({0})")]
DkgError(DkgError),
}
fn check_keys<C: Ciphersuite>(keys: &[C::G]) -> Result<u16, MusigError<C>> {
if keys.is_empty() {
Err(MusigError::NoKeysProvided)?;
}
let keys_len = u16::try_from(keys.len())
.map_err(|_| MusigError::TooManyKeysProvided { max: u16::MAX, provided: keys.len() })?;
let mut set = HashSet::with_capacity(keys.len());
for key in keys {
let bytes = key.to_bytes().as_ref().to_vec();
if !set.insert(bytes) {
Err(MusigError::DuplicatedParticipant(*key))?;
}
}
Ok(keys_len)
}
fn binding_factor_transcript<C: Ciphersuite>(
context: [u8; 32],
keys_len: u16,
keys: &[C::G],
) -> Vec<u8> {
debug_assert_eq!(usize::from(keys_len), keys.len());
let mut transcript = vec![];
transcript.extend(&context);
transcript.extend(keys_len.to_le_bytes());
for key in keys {
transcript.extend(key.to_bytes().as_ref());
}
transcript
}
fn binding_factor<C: Ciphersuite>(mut transcript: Vec<u8>, i: u16) -> C::F {
transcript.extend(i.to_le_bytes());
C::hash_to_F(b"dkg-musig", &transcript)
}
#[allow(clippy::type_complexity)]
fn musig_key_multiexp<C: Ciphersuite>(
context: [u8; 32],
keys: &[C::G],
) -> Result<Vec<(C::F, C::G)>, MusigError<C>> {
let keys_len = check_keys::<C>(keys)?;
let transcript = binding_factor_transcript::<C>(context, keys_len, keys);
let mut multiexp = Vec::with_capacity(keys.len());
for i in 1 ..= keys_len {
multiexp.push((binding_factor::<C>(transcript.clone(), i), keys[usize::from(i - 1)]));
}
Ok(multiexp)
}
/// The group key resulting from using this library's MuSig key aggregation.
///
/// This function executes in variable time and MUST NOT be used with secret data.
pub fn musig_key_vartime<C: Ciphersuite>(
context: [u8; 32],
keys: &[C::G],
) -> Result<C::G, MusigError<C>> {
Ok(multiexp::multiexp_vartime(&musig_key_multiexp(context, keys)?))
}
/// The group key resulting from using this library's MuSig key aggregation.
pub fn musig_key<C: Ciphersuite>(context: [u8; 32], keys: &[C::G]) -> Result<C::G, MusigError<C>> {
Ok(multiexp::multiexp(&musig_key_multiexp(context, keys)?))
}
/// An n-of-n non-interactive DKG which does not guarantee the usability of the resulting key.
pub fn musig<C: Ciphersuite>(
context: [u8; 32],
private_key: Zeroizing<C::F>,
keys: &[C::G],
) -> Result<ThresholdKeys<C>, MusigError<C>> {
let our_pub_key = C::generator() * private_key.deref();
let Some(our_i) = keys.iter().position(|key| *key == our_pub_key) else {
Err(MusigError::DkgError(DkgError::NotParticipating))?
};
let keys_len: u16 = check_keys::<C>(keys)?;
let params = ThresholdParams::new(
keys_len,
keys_len,
// The `+ 1` won't fail as `keys.len() <= u16::MAX`, so any index is `< u16::MAX`
Participant::new(
u16::try_from(our_i).expect("keys.len() <= u16::MAX yet index of keys > u16::MAX?") + 1,
)
.expect("i + 1 != 0"),
)
.map_err(MusigError::DkgError)?;
let transcript = binding_factor_transcript::<C>(context, keys_len, keys);
let mut binding_factors = Vec::with_capacity(keys.len());
let mut multiexp = Vec::with_capacity(keys.len());
let mut verification_shares = HashMap::with_capacity(keys.len());
for (i, key) in (1 ..= keys_len).zip(keys.iter().copied()) {
let binding_factor = binding_factor::<C>(transcript.clone(), i);
binding_factors.push(binding_factor);
multiexp.push((binding_factor, key));
let i = Participant::new(i).expect("non-zero u16 wasn't a valid Participant index?");
verification_shares.insert(i, key);
}
let group_key = multiexp::multiexp(&multiexp);
debug_assert_eq!(our_pub_key, verification_shares[&params.i()]);
debug_assert_eq!(musig_key_vartime::<C>(context, keys), Ok(group_key));
ThresholdKeys::new(
params,
Interpolation::Constant(binding_factors),
private_key,
verification_shares,
)
.map_err(MusigError::DkgError)
}
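
In equation form (our notation, matching the transcript construction above), for context ctx and keys K_1, ..., K_n:

  b_i = hash_to_F("dkg-musig", ctx || n || K_1 || ... || K_n || i)
  K   = b_1 * K_1 + b_2 * K_2 + ... + b_n * K_n

Each signer's verification share is its own key K_i, with the constant binding factor b_i serving as its interpolation weight (hence `Interpolation::Constant`).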


@@ -0,0 +1,71 @@
use std::collections::HashMap;
use zeroize::Zeroizing;
use rand_core::OsRng;
use dalek_ff_group::Ristretto;
use ciphersuite::{group::ff::Field, Ciphersuite};
use dkg_recovery::recover_key;
use crate::*;
/// Tests MuSig key generation.
#[test]
pub fn test_musig() {
const PARTICIPANTS: u16 = 5;
let mut keys = vec![];
let mut pub_keys = vec![];
for _ in 0 .. PARTICIPANTS {
let key = Zeroizing::new(<Ristretto as Ciphersuite>::F::random(&mut OsRng));
pub_keys.push(<Ristretto as Ciphersuite>::generator() * *key);
keys.push(key);
}
const CONTEXT: [u8; 32] = *b"MuSig Test ";
// Empty signing set
musig::<Ristretto>(CONTEXT, Zeroizing::new(<Ristretto as Ciphersuite>::F::ZERO), &[])
.unwrap_err();
// Signing set we're not part of
musig::<Ristretto>(
CONTEXT,
Zeroizing::new(<Ristretto as Ciphersuite>::F::ZERO),
&[<Ristretto as Ciphersuite>::generator()],
)
.unwrap_err();
// Test with n keys
{
let mut created_keys = HashMap::new();
let mut verification_shares = HashMap::new();
let group_key = musig_key::<Ristretto>(CONTEXT, &pub_keys).unwrap();
for (i, key) in keys.iter().enumerate() {
let these_keys = musig::<Ristretto>(CONTEXT, key.clone(), &pub_keys).unwrap();
assert_eq!(these_keys.params().t(), PARTICIPANTS);
assert_eq!(these_keys.params().n(), PARTICIPANTS);
assert_eq!(usize::from(u16::from(these_keys.params().i())), i + 1);
verification_shares.insert(
these_keys.params().i(),
<Ristretto as Ciphersuite>::generator() * **these_keys.original_secret_share(),
);
assert_eq!(these_keys.group_key(), group_key);
created_keys.insert(these_keys.params().i(), these_keys);
}
for keys in created_keys.values() {
for (l, verification_share) in &verification_shares {
assert_eq!(keys.original_verification_share(*l), *verification_share);
}
}
assert_eq!(
<Ristretto as Ciphersuite>::generator() *
*recover_key(&created_keys.values().cloned().collect::<Vec<_>>()).unwrap(),
group_key
);
}
}


@@ -0,0 +1,37 @@
[package]
name = "dkg-pedpop"
version = "0.6.0"
description = "The PedPoP distributed key generation protocol"
license = "MIT"
repository = "https://github.com/serai-dex/serai/tree/develop/crypto/dkg/pedpop"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = ["dkg", "multisig", "threshold", "ff", "group"]
edition = "2021"
rust-version = "1.80"
[package.metadata.docs.rs]
all-features = true
rustdoc-args = ["--cfg", "docsrs"]
[lints]
workspace = true
[dependencies]
thiserror = { version = "2", default-features = false, features = ["std"] }
zeroize = { version = "^1.5", default-features = false, features = ["std", "zeroize_derive"] }
rand_core = { version = "0.6", default-features = false, features = ["std"] }
transcript = { package = "flexible-transcript", path = "../../transcript", version = "^0.3.3", default-features = false, features = ["std", "recommended"] }
chacha20 = { version = "0.9", default-features = false, features = ["std", "zeroize"] }
multiexp = { path = "../../multiexp", version = "0.4", default-features = false, features = ["std"] }
ciphersuite = { path = "../../ciphersuite", version = "^0.4.1", default-features = false, features = ["std"] }
schnorr = { package = "schnorr-signatures", path = "../../schnorr", version = "^0.5.1", default-features = false, features = ["std"] }
dleq = { path = "../../dleq", version = "^0.4.1", default-features = false, features = ["std", "serialize"] }
dkg = { path = "../", version = "0.6", default-features = false, features = ["std"] }
[dev-dependencies]
rand_core = { version = "0.6", default-features = false, features = ["getrandom"] }
dalek-ff-group = { path = "../../dalek-ff-group", default-features = false }


@@ -1,6 +1,6 @@
MIT License
Copyright (c) 2022-2024 Luke Parker
Copyright (c) 2021-2025 Luke Parker
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal


@@ -0,0 +1,12 @@
# Distributed Key Generation - PedPoP
This implements the PedPoP distributed key generation protocol for the
[`dkg`](https://docs.rs/dkg) crate's types.
This crate was originally part of the `dkg` crate, which was
[audited by Cypher Stack in March 2023](
https://github.com/serai-dex/serai/raw/e1bb2c191b7123fd260d008e31656d090d559d21/audits/Cypher%20Stack%20crypto%20March%202023/Audit.pdf
), culminating in commit
[669d2dbffc1dafb82a09d9419ea182667115df06](
https://github.com/serai-dex/serai/tree/669d2dbffc1dafb82a09d9419ea182667115df06
). Any subsequent changes have not undergone auditing.


@@ -21,7 +21,7 @@ use multiexp::BatchVerifier;
use schnorr::SchnorrSignature;
use dleq::DLEqProof;
use crate::{Participant, ThresholdParams};
use dkg::{Participant, ThresholdParams};
mod sealed {
use super::*;
@@ -48,8 +48,8 @@ pub(crate) use sealed::*;
/// Wraps a message with a key to use for encryption in the future.
#[derive(Clone, PartialEq, Eq, Debug, Zeroize)]
pub struct EncryptionKeyMessage<C: Ciphersuite, M: Message> {
pub(crate) msg: M,
pub(crate) enc_key: C::G,
msg: M,
enc_key: C::G,
}
// Doesn't impl ReadWrite so that doesn't need to be imported
@@ -69,7 +69,7 @@ impl<C: Ciphersuite, M: Message> EncryptionKeyMessage<C, M> {
buf
}
#[cfg(any(test, feature = "tests"))]
#[cfg(test)]
pub(crate) fn enc_key(&self) -> C::G {
self.enc_key
}
@@ -348,12 +348,17 @@ impl<C: Ciphersuite> Decryption<C> {
pub(crate) fn new(context: [u8; 32]) -> Self {
Self { context, enc_keys: HashMap::new() }
}
pub(crate) fn register(&mut self, participant: Participant, key: C::G) {
pub(crate) fn register<M: Message>(
&mut self,
participant: Participant,
msg: EncryptionKeyMessage<C, M>,
) -> M {
assert!(
!self.enc_keys.contains_key(&participant),
"Re-registering encryption key for a participant"
);
self.enc_keys.insert(participant, key);
self.enc_keys.insert(participant, msg.enc_key);
msg.msg
}
// Given a message, and the intended decryptor, and a proof for its key, decrypt the message.
@@ -425,7 +430,12 @@ impl<C: Ciphersuite> Zeroize for Encryption<C> {
}
impl<C: Ciphersuite> Encryption<C> {
pub(crate) fn new(context: [u8; 32], i: Participant, enc_key: Zeroizing<C::F>) -> Self {
pub(crate) fn new<R: RngCore + CryptoRng>(
context: [u8; 32],
i: Participant,
rng: &mut R,
) -> Self {
let enc_key = Zeroizing::new(C::random_nonzero_F(rng));
Self {
context,
i,
@@ -439,8 +449,12 @@ impl<C: Ciphersuite> Encryption<C> {
EncryptionKeyMessage { msg, enc_key: self.enc_pub_key }
}
pub(crate) fn register(&mut self, participant: Participant, key: C::G) {
self.decryption.register(participant, key)
pub(crate) fn register<M: Message>(
&mut self,
participant: Participant,
msg: EncryptionKeyMessage<C, M>,
) -> M {
self.decryption.register(participant, msg)
}
pub(crate) fn encrypt<R: RngCore + CryptoRng, E: Encryptable>(

Some files were not shown because too many files have changed in this diff.