590 Commits

Author SHA1 Message Date
Luke Parker
9a75f92864 Thoroughly update versions and methodology
For hash-pinned dependencies, adds comments documenting the associated
versions.

Adds a pin to `slither-analyzer` which was prior missing.

Updates to Monero 0.18.4.4.

`mimalloc` now has the correct option set when building for `musl`. A C++
compiler is no longer required in its Docker image.

The runtime's `Dockerfile` now symlinks a `libc.so` already present on the
image instead of creating one itself. It also builds the runtime within the
image to ensure the build only happens once. The test ensuring the methodology
is reproducible has been updated accordingly: it no longer simply creates
containers from the image, but rebuilds the image entirely. This is also more
robust and arguably should have already been done.

The pin to the exact hash of the `patch-polkadot-sdk` repo in every
`Cargo.toml` has been removed. The lockfile already serves that role,
simplifying updating in the future.

The latest Rust nightly is adopted as well (superseding
https://github.com/serai-dex/serai/pull/697).

The `librocksdb-sys` patch is replaced with a `kvdb-rocksdb` patch, removing a
git dependency, thanks to https://github.com/paritytech/parity-common/pull/950.
2025-12-01 18:17:01 -05:00
Luke Parker
30ea9d9a06 Tidy the DEX pallet 2025-11-30 21:42:27 -05:00
Luke Parker
c45c973ca1 Remove musl-dev from runtime/Dockerfile
It was pinned with a version tag rather than a hash. Removing it ensures we
are deterministic to the image (specified by hash), `Cargo.lock`, and source
code alone.

Unfortunately, this was incredibly annoying to do, the exact process uncovering
a SIGSEGV in stable Rust. The extensive documentation details the solution.
Thankfully, it works now.
2025-11-27 03:37:37 -05:00
Luke Parker
6e37ac030d Add patch for alloy-eip2124 to an empty crate
Removes the `crc` dependency which had a unique author associated.
2025-11-26 17:01:03 -05:00
Luke Parker
e7c759c468 Improve substrate-median tests
The use of a dedicated test module ensures the API doesn't hide anything which
needs to be public. There are also now explicit tests for when the median is
the popped value.
2025-11-25 23:46:12 -05:00
Luke Parker
8ec0582237 Add module to calculate medians 2025-11-25 22:39:52 -05:00
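A minimal sketch of median calculation, for illustration only; the in-tree
`substrate-median` module's actual API may maintain a sorted structure with
pops, per the tests described above.

```rust
/// Return the median of the provided values, if any.
///
/// For an even count, the lower of the two middle values is taken, keeping
/// the result within the set of observed values.
fn median(values: &mut Vec<u64>) -> Option<u64> {
  if values.is_empty() {
    return None;
  }
  values.sort_unstable();
  Some(values[(values.len() - 1) / 2])
}
```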
Luke Parker
8d8e8a7a77 Remove unnecessary MSRVs from patches/ 2025-11-25 17:05:30 -05:00
Luke Parker
028ec3cce0 borsh 1.6.0
Bumps the MSRV for some of our crates, which is fine.
2025-11-25 16:58:19 -05:00
Luke Parker
c49215805f Update Substrate 2025-11-25 00:06:54 -05:00
Luke Parker
2ffdd2a01d Update monero-oxide, Substrate 2025-11-22 11:49:25 -05:00
Luke Parker
e1e6e67d4a Ensure desired pruning behavior is held within the node 2025-11-18 21:46:58 -05:00
Luke Parker
6b19780c7b Remove historical state access from Serai
Resolves https://github.com/serai-dex/serai/issues/694.
2025-11-18 21:27:47 -05:00
Luke Parker
6100c3ca90 Restore patches/dalek-ff-group
Ensures `crypto/dalek-ff-group` is pure.
2025-11-16 19:04:57 -05:00
Luke Parker
fa0ed4b180 Add validator sets RPC functions necessary for the coordinator 2025-11-16 17:38:08 -05:00
Luke Parker
0ea16f9e01 doc_auto_cfg -> doc_cfg 2025-11-16 17:38:08 -05:00
Luke Parker
7a314baa9f Update all of serai-coordinator to compile with the new serai-client-serai 2025-11-16 17:38:03 -05:00
Luke Parker
9891ccade8 Add From<*::Call> for Call to serai-abi 2025-11-16 16:43:06 -05:00
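A hedged sketch of the shape of these conversions (the module and variant
names here are stand-ins, not the actual serai-abi items):

```rust
// A stand-in for one of the ABI's modules
mod coins {
  pub enum Call {
    Transfer { amount: u64 },
  }
}

// The top-level Call, with one variant per module
enum Call {
  Coins(coins::Call),
}

// The described From impl, letting module-level calls convert upwards
impl From<coins::Call> for Call {
  fn from(call: coins::Call) -> Self {
    Call::Coins(call)
  }
}
```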
Luke Parker
f1f166c168 Restore publish_transaction RPC to Serai 2025-11-16 16:43:06 -05:00
Luke Parker
df4aee2d59 Update serai-client to solely be an umbrella crate of the dedicated client libraries 2025-11-16 16:43:05 -05:00
Luke Parker
302a43653f Add helper to get TemporalSerai as of the latest finalized block 2025-11-16 16:42:36 -05:00
Luke Parker
d219b77bd0 Add bindings to the events from the coins module to serai-client-serai 2025-11-15 16:13:25 -05:00
Luke Parker
fce26eaee1 Update the block RPCs to return null when missing, not an error
Promotes clarity.
2025-11-15 16:12:39 -05:00
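A hedged sketch of the distinction (types hypothetical, not the actual RPC):
returning an `Option` lets a missing block serialize as JSON `null` instead of
surfacing as an error.

```rust
use serde::Serialize;

#[derive(Serialize)]
struct Block {
  number: u64,
}

// A missing block is an expected case, not an error
fn block_by_number(chain: &[Block], number: u64) -> Option<&Block> {
  chain.iter().find(|block| block.number == number)
}

fn main() {
  let chain = [Block { number: 0 }];
  // None serializes to `null`
  assert_eq!(serde_json::to_string(&block_by_number(&chain, 1)).unwrap(), "null");
}
```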
Luke Parker
3cfbd9add7 Update serai-coordinator-cosign to the new serai-client-serai 2025-11-15 16:10:58 -05:00
Luke Parker
609cf06393 Update SetDecided to include the validators
Necessary for light-client protocols to follow along with consensus. Arguably,
anyone handling GRANDPA's consensus could peek at this through the consensus
commit anyways, but we shouldn't, so we defer that.
2025-11-15 14:59:33 -05:00
Luke Parker
46b1f1b7ec Add test for the integrity of headers 2025-11-14 12:04:21 -05:00
Luke Parker
09113201e7 Fixes to the validator sets RPC 2025-11-14 11:19:02 -05:00
Luke Parker
556d294157 Add pallet-timestamp
Ensures the timestamp is sent, is within expected parameters, and is correct
in relation to `pallet-babe`.
2025-11-14 09:59:32 -05:00
Luke Parker
82ca889ed3 Wrap Proposer so we can add the SeraiPreExecutionDigest (timestamps) 2025-11-14 08:02:54 -05:00
Luke Parker
cde0f753c2 Correct Serai header on genesis
Includes a couple misc fixes for the RPC as well.
2025-11-14 07:22:59 -05:00
Luke Parker
6ff0ef7aa6 Type the errors yielded by serai-node's RPC 2025-11-14 03:52:35 -05:00
Luke Parker
f9e3d1b142 Expand validator sets API with the rest of the events and some getters
We could've added a storage API, and fetched fields that way, except we want
the storage to be opaque. That meant we needed to add the RPC routes to the
node, which also simplifies other people writing RPC code and fetching these
fields. Then the node could've used the storage API, except a lot of the
storage in validator-sets is marked opaque and meant to only be read via functions,
so extending the runtime made the most sense.
2025-11-14 03:37:06 -05:00
Luke Parker
a793aa18ef Make ethereum-schnorr-contract no-std and no-alloc eligible 2025-11-13 05:48:18 -05:00
Luke Parker
5662beeb8a Patch from parity-bip39 back to bip39
Per https://github.com/michalkucharczyk/rust-bip39/tree/mku-2.0.1-release,
`parity-bip39` was a fork to publish a release of `bip39` with two specific PRs
merged. Not only have those PRs been merged, but `bip39` now accepts
`bitcoin_hashes 0.14` (https://github.com/rust-bitcoin/rust-bip39/pull/76),
making this a great time to reconcile (even though it does technically add a
git dependency until the new release is cut...).
2025-11-13 05:25:41 -05:00
Luke Parker
509bd58f4e Add method to fetch a block's events to the RPC 2025-11-13 05:13:55 -05:00
Luke Parker
367a5769e8 Update deny.toml 2025-11-13 00:37:51 -05:00
Luke Parker
cb6eb6430a Update version of substrate 2025-11-13 00:17:19 -05:00
Luke Parker
4f82e5912c Use Alpine to build the runtime
Smaller, works without issue.
2025-11-12 23:02:22 -05:00
Luke Parker
ac7af40f2e Remove rust-src as a component for WASM
It's unnecessary since `wasm32v1-none`.
2025-11-12 23:00:57 -05:00
Luke Parker
264bdd46ca Update serai-runtime to compile a minimum subset of itself for non-WASM targets
We only really care about it as a WASM blob, given `serai-abi`, so there's no
need to compile it twice when it's expensive to build and we don't care about
the native version at all.
2025-11-12 23:00:00 -05:00
Luke Parker
c52f7634de Patch librocksdb-sys to never enable jemalloc, which conflicts with mimalloc
Allows us to update mimalloc and enable the newly added guard pages.

Conflict identified by @PlasmaPower.
2025-11-11 23:04:05 -05:00
Luke Parker
21eaa5793d Error when not running the dev network if an explicit WASM runtime isn't provided 2025-11-11 23:02:20 -05:00
Luke Parker
c744a80d80 Check the finalized function doesn't claim unfinalized blocks are in fact finalized 2025-11-11 23:02:20 -05:00
Luke Parker
a34f9f6164 Build and run the message queue over Alpine
We prior stopped doing so for stability reasons, but this _should_ be tried
again.
2025-11-11 23:02:20 -05:00
Luke Parker
353683cfd2 revm 33 2025-11-11 23:02:16 -05:00
Luke Parker
d4f77159c4 Rust 1.91.1 due to the regression re: wasm builds 2025-11-11 09:08:30 -05:00
Luke Parker
191bf4bdea Remove std feature from revm
It's unnecessary and bloats the tree decently.
2025-11-10 06:34:33 -05:00
Luke Parker
06a4824aba Move bitcoin-serai to core-json and feature-gate the RPC functionality 2025-11-10 05:31:13 -05:00
Luke Parker
e65a37e639 Update various versions 2025-11-10 04:02:02 -05:00
Luke Parker
4653ef4a61 Use dockertest for the newly added serai-client-serai test 2025-11-07 02:08:02 -05:00
Luke Parker
ce08fad931 Add initial basic tests for serai-client-serai 2025-11-06 20:12:37 -05:00
Luke Parker
1866bb7ae3 Begin work on the new RPC for the new node 2025-11-06 03:08:43 -05:00
Luke Parker
aff2065c31 Polkadot stable2509-1 2025-11-06 00:23:35 -05:00
Luke Parker
7300700108 Update misc versions 2025-11-05 19:11:33 -05:00
Luke Parker
31874ceeae serai-node which compiles and produces/finalizes blocks with --dev 2025-11-05 18:20:23 -05:00
Luke Parker
012b8fddae Get serai-node to compile again 2025-11-05 01:18:21 -05:00
Luke Parker
d2f58232c8 Tweak serai-coordinator-cosign to make it closer to compiling again
Adds `PartialOrd, Ord` derivations to some items in `serai-primitives` so they
may be used as keys within `borsh`-serialized maps.
2025-11-04 19:42:23 -05:00
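A hedged sketch of why the derivations matter (types are stand-ins, not the
actual serai-primitives items): `BTreeMap`, which `borsh` serializes
deterministically, requires `Ord` keys.

```rust
use std::collections::BTreeMap;

// Without PartialOrd/Ord, this couldn't key a BTreeMap
#[derive(PartialEq, Eq, PartialOrd, Ord)]
struct NetworkId(u16);

fn main() {
  let mut allocations = BTreeMap::new();
  allocations.insert(NetworkId(0), 100u64);
  allocations.insert(NetworkId(1), 200u64);
}
```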
Luke Parker
49794b6a75 Correct when spin::Lazy is exposed as std_shims::sync::LazyLock
It's intended to always be used, even on `std`, when `std::sync::LazyLock` is
not available.
2025-11-04 19:28:26 -05:00
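A hedged sketch of the intended selection logic (the cfg/feature names here
are illustrative, not std-shims' actual flags):

```rust
// Use the standard library's LazyLock when it's available (it stabilized in
// Rust 1.80), and spin's otherwise, even if `std` itself is enabled
#[cfg(all(feature = "std", not(feature = "pre-1.80")))]
pub use std::sync::LazyLock;
#[cfg(any(not(feature = "std"), feature = "pre-1.80"))]
pub use spin::Lazy as LazyLock;
```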
Luke Parker
973287d0a1 Smash serai-client so the processors don't need the entire lib to access their specific code
We prior controlled this with feature flags. It's just better to define their
own crates.
2025-11-04 19:27:53 -05:00
Luke Parker
1b499edfe1 Misc fixes so this compiles 2025-11-04 18:56:56 -05:00
Luke Parker
642848bd24 Bump revm 2025-11-04 13:31:46 -05:00
Luke Parker
f7fb78bdd6 Merge branch 'next' into next-polkadot-sdk 2025-11-04 13:24:00 -05:00
Luke Parker
9c47ef2658 Restore deny exception for kayabaNerve/elliptic-curves, accidentally dropped when merging develop 2025-11-04 13:18:59 -05:00
Luke Parker
e1b6b638c6 Merge branch 'develop' into next 2025-11-04 13:14:38 -05:00
Luke Parker
65613750e1 Merge branch 'next' into next-polkadot-sdk 2025-11-04 12:06:13 -05:00
Luke Parker
87ee879dea doc_auto_cfg -> doc_cfg 2025-11-04 10:20:17 -05:00
Luke Parker
b5603560e8 Merge branch 'develop' into next 2025-11-04 10:19:38 -05:00
Luke Parker
94faf098b6 Update nightly version 2025-10-05 18:44:04 -04:00
Luke Parker
03e45f73cd Merge branch 'develop' into next 2025-10-05 18:43:53 -04:00
Luke Parker
56f6ba2dac Merge branch 'develop' into next-polkadot-sdk 2025-10-05 06:04:17 -04:00
Luke Parker
138a0e9b40 Resolve https://github.com/serai-dex/serai/issues/680 2025-09-30 01:05:17 -04:00
Luke Parker
4fc7263ac3 Make simple_request::Client generic to the executor
Part of https://github.com/serai-dex/serai/issues/682.

We don't remove the use of `tokio::sync::Mutex` now as `hyper` pulls in
`tokio::sync` anyways, so there's no point in replacing it. This doesn't yet
solve TLS for non-`tokio` `Client`s.
2025-09-30 01:05:12 -04:00
Luke Parker
f27fd59fa6 Update documentation within modular-frost
Resolves https://github.com/serai-dex/serai/issues/675.
2025-09-30 00:27:29 -04:00
Luke Parker
08f6af8bb9 Remove borsh from dkg
It pulls in a lot of bespoke dependencies for the little utility it directly provides.

Moves the necessary code into the processor.
2025-09-27 02:07:18 -04:00
Luke Parker
3512b3832d Don't actually define a pallet for pallet-session
We need its `Config`, not it as a pallet. As it has no storage, no calls, no
events, this is fine.
2025-09-26 23:08:38 -04:00
Luke Parker
1164f92ea1 Update usage of now-associated const in processor/key-gen 2025-09-26 22:48:52 -04:00
Luke Parker
0a3ead0e19 Add patches to remove the unused optional dependencies tracked in tree
Also performs the usual `cargo update`.
2025-09-26 22:47:47 -04:00
Luke Parker
437f0e9a93 Remove serdect by removing the unnecessary "alloc" feature from crypto-bigint
This only works for the legacy `crypto-bigint` and downstream consumers who
don't have the modern `serdect` pulled in for independent reasons.
2025-09-26 20:58:45 -04:00
Luke Parker
cc5d38f1ce dkg-evrf Security Proofs (#681)
* Add audit statement for `dkg-evrf`

This doesn't cover the implementation, solely the academia and background.

Also moves the existing audit of the `crypto` folder for organizational
reasons.

* Add files via upload
2025-09-26 11:20:48 -04:00
Luke Parker
0ce025e0c2 Update build-dependencies CI action 2025-09-21 15:40:58 -04:00
Luke Parker
ea66cd0d1a Update build-dependencies CI action 2025-09-21 15:40:15 -04:00
Luke Parker
8b32fba458 Minor cargo update 2025-09-21 15:40:05 -04:00
Luke Parker
e63acf3f67 Restore a runtime which compiles
Adds BABE, GRANDPA, to the runtime definition and a few stubs for not yet
implemented interfaces.
2025-09-21 13:16:43 -04:00
Luke Parker
d373d2a4c9 Restore AllowMint to serai-validator-sets-pallet and reorganize TODOs 2025-09-20 04:45:28 -04:00
Luke Parker
cbf998ff30 Restore report_slashes
This does not yet handle the `SlashReport`. It solely handles the routing for
it.
2025-09-20 04:16:01 -04:00
Luke Parker
ef07253a27 Restore the event to serai-validator-sets-pallet 2025-09-20 03:42:24 -04:00
Luke Parker
ffae6753ec Restore the set_keys call 2025-09-20 03:04:26 -04:00
Luke Parker
a04215bc13 Remove commented-out slashing code from serai-validator-sets-pallet
Deferred to https://github.com/serai-dex/serai/issues/657.
2025-09-20 03:04:19 -04:00
Luke Parker
28aea8a442 Incorporate a check that a validator won't prevent ever not having a single point of failure 2025-09-20 01:58:39 -04:00
Luke Parker
7b46477ca0 Add explicit hook for deciding whether to include the genesis validators 2025-09-20 01:57:55 -04:00
Luke Parker
e62b62ddfb Restore usage of pallet-grandpa to serai-validator-sets-pallet 2025-09-20 01:36:11 -04:00
Luke Parker
a2d8d0fd13 Restore integration with pallet-babe to serai-validator-sets-pallet 2025-09-20 01:23:02 -04:00
Luke Parker
b2b36b17c4 Restore GenesisConfig to the validator sets pallet 2025-09-20 00:06:19 -04:00
Luke Parker
9de8394efa Emit events within the signals pallet 2025-09-19 22:44:29 -04:00
Luke Parker
3cb9432daa Have the coins pallet emit events via serai_core_pallet
`serai_core_pallet` solely defines an accumulator for the events. We use the
traditional `frame_system::Events` to store them for now and enable retrieval.
2025-09-19 22:18:55 -04:00
Luke Parker
3f5150b3fa Properly define the core pallet instead of placing it within the runtime 2025-09-19 19:05:47 -04:00
Luke Parker
d74b00b9e4 Update monero-oxide to the branch with the new RPC
See https://github.com/monero-oxide/monero-oxide/pull/66.

Allows us to remove the shim `simple-request 0.1` we had to define as we now
have `simple-request 0.2` in tree.
2025-09-18 19:09:22 -04:00
Luke Parker
224cf4ea21 Update monero-oxide to the branch with the new RPC
See https://github.com/monero-oxide/monero-oxide/pull/66.

Allows us to remove the shim `simple-request 0.1` we had to define as we now
have `simple-request 0.2` in tree.
2025-09-18 19:00:10 -04:00
Luke Parker
3955f92cc2 Merge branch 'next' into next-polkadot-sdk 2025-09-18 18:19:14 -04:00
Luke Parker
a9b1e5293c Support webpki-roots as a fallback in simple-request 2025-09-18 18:15:24 -04:00
Luke Parker
80009ab67f Tidy unused import 2025-09-18 17:49:37 -04:00
Luke Parker
df9fda2971 Fixes from errors in cherry-picked commits 2025-09-18 17:49:32 -04:00
Luke Parker
ca8afb83a1 simple-request 0.2.0 2025-09-18 17:41:31 -04:00
Luke Parker
18a9cf2535 Have simple-request return an error upon failing to find the system's root certificates 2025-09-18 17:41:31 -04:00
Luke Parker
10c126ad92 Misc updates 2025-09-18 17:41:25 -04:00
Luke Parker
19305aebc9 Finally make modular-frost work with alloc alone
Carries the update to `frost-schnorrkel` and `bitcoin-serai`.
2025-09-18 17:06:57 -04:00
Luke Parker
be68e27551 Tweak multiexp to compile on core
On `core`, it'll use a serial implementation, of no benefit other than the
fact that when `alloc` _is_ enabled, the multi-scalar multiplication
algorithms are used.

`schnorr-signatures` was prior tweaked to include a shim for
`SchnorrSignature::verify` which didn't use `multiexp_vartime` yet followed
this same premise. Now, instead of callers writing these shims, it's handled
within `multiexp`.
2025-09-18 17:06:42 -04:00
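A hedged sketch of the fallback described (not multiexp's actual internals):
without `alloc`, the same API degrades to a serial sum of scalar
multiplications, so callers need no shims of their own.

```rust
use group::Group;

// Serial implementation: identical semantics to the multi-scalar
// multiplication algorithms, without their batching benefit
fn multiexp_vartime<G: Group>(pairs: &[(G::Scalar, G)]) -> G {
  pairs.iter().fold(G::identity(), |accum, (scalar, point)| accum + (*point * *scalar))
}
```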
Luke Parker
d6d96fe8ff Correct std-shims feature flagging 2025-09-18 17:06:31 -04:00
Luke Parker
95909d83a4 Expose std_shims::io on core
The `io::Write` trait is somewhat worthless, being implemented for nothing, yet
`Read` remains fully functional. This also allows using its polyfills _without_
requiring `alloc`.

Opportunity taken to make `schnorr-signatures` not require `alloc`.

This will require a version bump before being published due to newly requiring
the `alloc` feature be specified to maintain pre-existing behavior.

Enables resolving https://github.com/monero-oxide/monero-oxide/issues/48.
2025-09-18 17:06:05 -04:00
Luke Parker
3bd48974f3 Add missing alloc feature to multiexp's use of zeroize
Fixes building `multiexp` without default features, without separately
specifying `zeroize` and adding the `alloc` feature.
2025-09-18 17:05:19 -04:00
Luke Parker
29093715e3 Add impl<R: Read> Read for &mut R to std_shims
Increases parity with `std::io`.
2025-09-18 17:05:07 -04:00
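A hedged sketch of the blanket impl, over a simplified stand-in for the
std_shims `Read` trait (the real trait mirrors `std::io::Read`):

```rust
trait Read {
  fn read(&mut self, buf: &mut [u8]) -> Result<usize, ()>;
}

// The added blanket impl: a mutable reference to a reader is itself a reader
impl<R: Read> Read for &mut R {
  fn read(&mut self, buf: &mut [u8]) -> Result<usize, ()> {
    (**self).read(buf)
  }
}
```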
Luke Parker
87b4dfc8f3 Expand std_shims::prelude to better match std::prelude 2025-09-18 17:04:54 -04:00
Luke Parker
4db78b1787 Add the ability to bound the response's size limit to simple-request 2025-09-18 17:04:41 -04:00
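A hedged sketch of response-size bounding (not simple-request's actual
internals): read at most `limit` bytes, erroring if the body exceeds it.

```rust
use std::io::Read;

fn read_bounded(body: impl Read, limit: u64) -> Result<Vec<u8>, &'static str> {
  let mut buffer = Vec::new();
  // Read one byte past the limit so exceeding it is detectable
  body.take(limit.saturating_add(1)).read_to_end(&mut buffer).map_err(|_| "I/O error")?;
  if (buffer.len() as u64) > limit {
    return Err("response exceeded size limit");
  }
  Ok(buffer)
}
```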
Luke Parker
02a5f15535 Make the MSRV lint more robust
The prior version would fail if the last entry in the final array was not
originally the last entry.
2025-09-18 17:04:10 -04:00
Luke Parker
a1ef18a039 Have simple-request return an error upon failing to find the system's root certificates 2025-09-18 17:03:16 -04:00
Luke Parker
bec806230a Misc updates 2025-09-18 16:25:33 -04:00
Luke Parker
8bafeab5b3 Tidy serai-signals-pallet
Adds `serai-validator-sets-pallet` and `serai-signals-pallet` to the runtime.
2025-09-16 08:45:02 -04:00
Luke Parker
3722df7326 Introduce KeyShares struct to represent the amount of key shares
Improvements, bug fixes associated.
2025-09-16 08:45:02 -04:00
Luke Parker
ddb8e1398e Finally make modular-frost work with alloc alone
Carries the update to `frost-schnorrkel` and `bitcoin-serai`.
2025-09-16 08:45:02 -04:00
Luke Parker
2be69b23b1 Tweak multiexp to compile on core
On `core`, it'll use a serial implementation of no benefit other than the fact
that when `alloc` _is_ enabled, it'll use the multi-scalar multiplication
algorithms.

`schnorr-signatures` was prior tweaked to include a shim for
`SchnorrSignature::verify` which didn't use `multiexp_vartime` yet this same
premise. Now, instead of callers writing these shims, it's within `multiexp`.
2025-09-16 08:45:02 -04:00
Luke Parker
a82ccadbb0 Correct std-shims feature flagging 2025-09-16 08:45:02 -04:00
Luke Parker
1ff2934927 cargo update 2025-09-16 08:44:54 -04:00
Luke Parker
cd4ffa862f Remove coins, validator-sets use of Substrate's event system
We've defined our own.
2025-09-15 21:32:20 -04:00
Luke Parker
c0a4d85ae6 Restore claim_deallocation call to validator-sets pallet 2025-09-15 21:32:01 -04:00
Luke Parker
55e845fe12 Expose std_shims::io on core
The `io::Write` trait is somewhat worthless, being implemented for nothing, yet
`Read` remains fully functional. This also allows using its polyfills _without_
requiring `alloc`.

Opportunity taken to make `schnorr-signatures` not require `alloc`.

This will require a version bump before being published due to newly requiring
the `alloc` feature be specified to maintain pre-existing behavior.

Enables resolving https://github.com/monero-oxide/monero-oxide/issues/48.
2025-09-15 21:24:10 -04:00
Luke Parker
5ea087d177 Add missing alloc feature to multiexp's use of zeroize
Fixes building `multiexp` without default features, without separately
specifying `zeroize` and adding the `alloc` feature.
2025-09-14 08:55:40 -04:00
Luke Parker
dd7dc0c1dc Add impl<R: Read> Read for &mut R to std_shims
Increases parity with `std::io`.
2025-09-12 18:26:27 -04:00
Luke Parker
c83fbb3e44 Expand std_shims::prelude to better match std::prelude 2025-09-12 18:24:56 -04:00
Luke Parker
befbbbfb84 Add the ability to bound the response's size limit to simple-request 2025-09-11 17:24:47 -04:00
Luke Parker
d0f497dc68 Latest patch-polkadot-sdk 2025-09-10 10:02:24 -04:00
Luke Parker
1b755a5d48 patch-polkadot-sdk enabling libp2p 0.56 2025-09-06 17:41:49 -04:00
Luke Parker
e5efcd56ba Make the MSRV lint more robust
The prior version would fail if the last entry in the final array was not
originally the last entry.
2025-09-06 14:43:21 -04:00
Luke Parker
5d60b3c2ae Update parity-db in serai-db
This synchronizes with an update to `patch-polkadot-sdk`.
2025-09-06 14:28:42 -04:00
Luke Parker
ae923b24ff Update `patch-polkadot-sdk`
Allows using `libp2p 0.55`.
2025-09-06 14:04:55 -04:00
Luke Parker
d304cd97e1 Merge branch 'next' into next-polkadot-sdk 2025-09-06 04:26:10 -04:00
Luke Parker
2b56dcdf3f Update patch-polkadot-sdk for bug fixes, removal of is-terminal
Adds a deny entry for `is-terminal` to stop it from secretly reappearing.

Restores the `is-terminal` patch for `is_terminal_polyfill` to have one less
external dependency.
2025-09-06 04:25:21 -04:00
Luke Parker
865e351f96 Bitcoin 29.1
Benefits from `v2transport`, `mempoolfullrbf`, and potentially TRUC.
2025-09-06 04:09:39 -04:00
Luke Parker
ea275df26c Re-export curve25519_dalek::RistrettoPoint for dalek_ff_group::RistrettoPoint
Sacrifices a `Hash` implementation (inefficient, and which already shouldn't
be used) we appear to have only used in two files (which have been patched).
2025-09-05 17:40:44 -04:00
Luke Parker
90804c4c30 Update deny.toml 2025-09-05 14:08:04 -04:00
Luke Parker
46caca2f51 Update patch-polkadot-sdk to remove scale_info 2025-09-05 14:07:52 -04:00
Luke Parker
2077e485bb Add borsh impls for SignedEmbeddedEllipticCurveKeys 2025-09-05 07:21:07 -04:00
Luke Parker
28dbef8a1c Update to the latest patch-polkadot-sdk
Removes several dependencies.
2025-09-05 06:57:30 -04:00
Luke Parker
2216ade8c4 Tweak how prime-field normalizes to the even square root 2025-09-04 20:48:15 -04:00
Luke Parker
3541197aa5 Merge branch 'next' into next-polkadot-sdk 2025-09-03 16:44:26 -04:00
Luke Parker
5265cc69de hex-literal 1 2025-09-03 13:56:48 -04:00
Luke Parker
a141deaf36 Smash the singular Ciphersuite trait into multiple
This helps identify where the various functionalities are used, or rather, not
used. The `Ciphersuite` trait present in `patches/ciphersuite`, facilitating
the entire FCMP++ tree, only requires the markers _and_ canonical point
decoding. I've opened a PR to upstream such a trait into `group`
(https://github.com/zkcrypto/group/pull/68).

`WrappedGroup` is still justified for as long as `Group::generator` exists.
Moving `::generator()` to its own trait, on an independent structure (upstream)
would be massively appreciated. @tarcieri also wanted to update from
`fn generator()` to `const GENERATOR`, which would encourage further discussion
on https://github.com/zkcrypto/group/issues/32 and
https://github.com/zkcrypto/group/issues/45, which have been stagnant.

The `Id` trait is occasionally used yet really should be first off the chopping
block.

Finally, `WithPreferredHash` is only actually used around a third of the time,
which more than justifies it being a separate trait.

---

Updates `dalek_ff_group::Scalar` to directly re-export
`curve25519_dalek::Scalar`, as doing so is without issue.
`dalek_ff_group::RistrettoPoint` also could be replaced with an export of
`curve25519_dalek::RistrettoPoint`, yet the coordinator relies on how we
implemented `Hash` on it for the hell of it, so it isn't worth it at this
time. `dalek_ff_group::EdwardsPoint` can't be replaced with a re-export of
`curve25519_dalek::SubgroupPoint` as it doesn't implement the `zeroize` and
`subtle` traits within a released, non-yanked version. This is relevant to
https://github.com/serai-dex/serai/issues/201 and
https://github.com/dalek-cryptography/curve25519-dalek/issues/811#issuecomment-3247732746.

Also updates the `Ristretto` ciphersuite to prefer `Blake2b-512` over
`SHA2-512`. In order to maintain compliance with FROST's IETF standard,
`modular-frost` defines its own ciphersuite for Ristretto which still uses
`SHA2-512`.
2025-09-03 13:50:20 -04:00
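A hedged sketch of the split's shape, per the text above (the exact bounds and
members differ in-tree):

```rust
use core::fmt::Debug;

// The base: markers plus canonical point decoding, all the FCMP++ tree needs
pub trait Ciphersuite: Clone + Copy + Debug {
  type F: Clone; // scalar field element
  type G: Clone; // group element
  /// Canonically decode a point, rejecting non-canonical encodings.
  fn decode_point(bytes: &[u8]) -> Option<Self::G>;
}

// The preferred hash, factored out as it's only used around a third of the
// time
pub trait WithPreferredHash: Ciphersuite {
  type H;
}
```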
Luke Parker
215e41fdb6 Remove deprecated APIs from dalek-ff-group
For backwards compatibility, it's now used as a patch (as prior done with
`ciphersuite`).

Removes `crypto-bigint 0.5` from the tree and shapes up what the next release
will look like.
2025-09-03 07:05:50 -04:00
Luke Parker
41c34d7f11 Remove crypto-bigint from the public API of prime-field 2025-09-03 07:05:45 -04:00
Luke Parker
974bc82387 Remove unnecessary to_string for clone 2025-09-03 06:11:32 -04:00
Luke Parker
47ef24a7cc Remove unused patch for parking_lot_core 2025-09-03 06:11:32 -04:00
Luke Parker
a2209dd6ff Misc clippy fixes 2025-09-03 06:10:54 -04:00
Luke Parker
2032cf355f Expose coins::Pallet::transfer_internal as transfer_fn
It is safe to call and assumes no preconditions.
2025-09-03 00:48:17 -04:00
Luke Parker
fe41b09fd4 Properly handle the error in validator-sets 2025-09-02 11:07:45 -04:00
Luke Parker
74bad049a7 Add abstraction for the embedded elliptic curve keys
It's minimal but still pleasant.
2025-09-02 10:42:06 -04:00
Luke Parker
72fefb3d85 Strongly type EmbeddedEllipticCurveKeys
Adds a signed variant to validate knowledge and ownership.

Add SCALE derivations for `EmbeddedEllipticCurveKeys`
2025-09-02 10:42:02 -04:00
Luke Parker
200c1530a4 WIP changes to validator-sets
Actually use the added `Allocations` abstraction

Start using the sessions API in the validator-sets pallet

Get a `substrate/validator-sets` approximate to compiling
2025-09-02 10:41:58 -04:00
Luke Parker
5736b87b57 Remove final references to scale in coordinator/processor
Slight tweaks to processor
2025-09-02 10:41:55 -04:00
Luke Parker
ada94e8c5d Get all processors to compile again
Requires splitting `serai-cosign` into `serai-cosign` and `serai-cosign-types`
so the processors don't require `serai-client/serai` (not correct yet).
2025-09-02 02:17:10 -04:00
Luke Parker
75240ed327 Update serai-message-queue to the new serai-primitives 2025-09-02 02:17:10 -04:00
Luke Parker
6177cf5c07 Have serai-runtime compile again 2025-09-02 02:17:10 -04:00
Luke Parker
0d38dc96b6 Use serai-primitives, not serai-client, when possible in coordinator/*
Also updates `serai-coordinator-tributary` to prefer `borsh` to SCALE.
2025-09-02 02:17:10 -04:00
Luke Parker
e8094523ff Use borsh instead of SCALE within tendermint-machine, tributary-sdk
Not only does this follow our general practice, the latest SCALE has a
possibly-lossy truncation in its current implementation for `enum`s I'd like to
avoid without simply silencing.
2025-09-02 02:17:09 -04:00
Luke Parker
53a64bc7e2 Update serai-abi, and dependencies, to patch-polkadot-sdk 2025-09-02 02:17:09 -04:00
Luke Parker
c0e48867e1 Merge branch 'develop' into next 2025-09-01 21:22:28 -04:00
Mohan
2a02a8dc59 fix(spec): svm version mismatch in docs; document foundryup (#665)
* fix(spec): Change svm version in docs to 0.8.26

* fix(spec): add instructions for using foundryup
2025-09-01 15:57:19 -04:00
Luke Parker
3c6e889732 Update Cargo.lock after rebase 2025-08-30 19:36:46 -04:00
Luke Parker
354efc0192 Add deallocate function to validator-sets session abstraction 2025-08-30 18:34:20 -04:00
Luke Parker
e20058feae Add a Sessions abstraction for validator-sets storage 2025-08-30 18:34:20 -04:00
Luke Parker
09f0714894 Add a dedicated Allocations struct for managing validator set allocations
Part of the DB abstraction necessary for this spaghetti.
2025-08-30 18:34:15 -04:00
Luke Parker
d3d539553c Restore the coins pallet to the runtime 2025-08-30 18:32:26 -04:00
Luke Parker
b08ae8e6a7 Add a non-canonical SCALE derivations feature
Enables representing IUMT within `StorageValues`. Applied to a variety of
values.

Fixes a bug where `Some([0; 32])` would be considered a valid block anchor.
2025-08-30 18:32:21 -04:00
Luke Parker
35db2924b4 Populate UnbalancedMerkleTrees in headers 2025-08-30 18:32:20 -04:00
Luke Parker
bfff823bf7 Add an UnbalancedMerkleTree primitive
The reasoning for it is documented with itself. The plan is to use it within
our header for committing to the DAG (allowing one header per epoch, yet
logarithmic proofs for any header within the epoch), the transactions
commitment (allowing logarithmic proofs of a transaction within a block,
without padding), and the events commitment (allowing logarithmic proofs of
unique events within a block, despite events not having a unique ID inherent).

This also defines transaction hashes and performs the necessary modifications
for transactions to be unique.
2025-08-30 18:32:16 -04:00
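A hedged sketch of one way an unbalanced Merkle root can avoid padding (this
is not the in-tree definition): a lone trailing node is promoted to the next
layer unchanged, keeping proofs logarithmic.

```rust
// Stand-in combiner; a real tree would use a domain-separated hash
fn hash(left: &[u8; 32], right: &[u8; 32]) -> [u8; 32] {
  let mut out = [0; 32];
  for i in 0 .. 32 {
    out[i] = left[i] ^ right[i].rotate_left(1);
  }
  out
}

fn unbalanced_merkle_root(mut layer: Vec<[u8; 32]>) -> Option<[u8; 32]> {
  if layer.is_empty() {
    return None;
  }
  while layer.len() > 1 {
    let mut next = Vec::with_capacity(layer.len().div_ceil(2));
    for pair in layer.chunks(2) {
      // An odd trailing node is carried up as-is, without padding
      next.push(if let [left, right] = pair { hash(left, right) } else { pair[0] });
    }
    layer = next;
  }
  Some(layer[0])
}
```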
Luke Parker
352af85498 Use borsh entirely in create_db 2025-08-30 18:32:07 -04:00
Luke Parker
ecad89b269 Remove now-consolidated primitives crates 2025-08-30 18:32:06 -04:00
Luke Parker
48f5ed71d7 Skeleton runtime with new types 2025-08-30 18:30:38 -04:00
Luke Parker
ed9cbdd8e0 Have apply return Ok even if calls failed
This ensures fees are paid, and block building isn't interrupted, even for TXs
which error.
2025-08-30 18:27:23 -04:00
Luke Parker
0ac11defcc Serialize BoundedVec not with a u32 length, but the minimum-viable uN where N%8==0
This does break borsh's definition of a Vec EXCEPT if the BoundedVec is
considered an enum. For sufficiently low bounds, this is viable, though it
requires automated code generation to be sane.
2025-08-30 18:27:23 -04:00
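A hedged sketch of the minimum-viable length prefix (names illustrative): a
bound of 255 or less needs only a `u8` where borsh would usually write a
`u32`.

```rust
fn write_bounded_len<const BOUND: usize>(len: usize, out: &mut Vec<u8>) {
  assert!(len <= BOUND);
  if BOUND <= (u8::MAX as usize) {
    out.push(len as u8);
  } else if BOUND <= (u16::MAX as usize) {
    out.extend((len as u16).to_le_bytes());
  } else {
    out.extend((len as u32).to_le_bytes());
  }
}
```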
Luke Parker
24e89316d5 Correct distinction/flow of check/validate/apply 2025-08-30 18:27:23 -04:00
Luke Parker
3f03dac050 Make transaction an enum of Unsigned, Signed 2025-08-30 18:27:23 -04:00
Luke Parker
820b710928 Remove RuntimeCall from Transaction
I believe this was originally here as we needed to return a reference, not an
owned instance, so this caching enabled returning a reference? Regardless, it
isn't valuable now.
2025-08-30 18:27:23 -04:00
Luke Parker
88c7ae3e7d Add traits necessary for serai_abi::Transaction to be usable in-runtime 2025-08-30 18:27:22 -04:00
Luke Parker
dd5e43760d Add the UNIX timestamp (in milliseconds) to the block
This is read from the BABE pre-digest when converting from a SubstrateHeader.
This causes the genesis block to have time 0 and all blocks produced with BABE
to have a time of the slot time. While the slot time is in 6-second intervals
(due to our target block time), defining in milliseconds preserves the ABI for
long-term goals (sub-second blocks).

Usage of the slot time deduplicates this field with BABE, and leaves the only
possible manipulation to propose during a slot or to not propose during a slot.

The actual reason this was implemented this way is because the Header trait is
overly restrictive and doesn't allow definition with new fields. Even if we
wanted to express the timestamp within the SubstrateHeader, we can't without
replacing Header::new and making a variety of changes to the polkadot-sdk
accordingly. Those aren't worth it at this moment compared to the solution
implemented.
2025-08-30 18:27:09 -04:00
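A hedged sketch of the arithmetic described, with illustrative constants: a
6-second target block time maps a BABE slot to a millisecond timestamp by
multiplication, and the genesis block (with no pre-digest) gets time 0.

```rust
const SLOT_DURATION_MS: u64 = 6_000;

fn block_time_ms(slot: Option<u64>) -> u64 {
  // Genesis has no BABE pre-digest, and so no slot
  slot.map_or(0, |slot| slot * SLOT_DURATION_MS)
}
```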
Luke Parker
776e417fd2 Redo primitives, abi
Consolidates all primitives into a single crate. We didn't benefit from its
fragmentation. I'm hesitant to say the new internal-organization is better (it
may be just as clunky), but it's at least in a single crate (not spread out
over micro-crates).

The ABI is the most distinct. We now entirely own it. Block header hashes don't
directly commit to any BABE data (avoiding potentially ~4 KB headers upon
session changes), and are hashed as borsh (a more widely used codec than
SCALE). There are still Substrate variants, using SCALE and with the BABE data,
but they're prunable from a protocol design perspective.

Defines a transaction as a Vec of Calls, allowing atomic operations.
2025-08-30 18:26:37 -04:00
Luke Parker
2f8ce15a92 Update deny, rust-src component 2025-08-30 18:25:02 -04:00
Luke Parker
af56304676 Update the git tags
Does no actual migration work. This allows establishing the difference in
dependencies between substrate and polkadot-sdk/substrate.
2025-08-30 18:23:49 -04:00
Luke Parker
62a2c4f20e Update nightly version 2025-08-30 18:22:48 -04:00
Luke Parker
c69841710a Remove unnecessary to_string for clone 2025-08-30 18:08:08 -04:00
Luke Parker
3158590675 Remove unused patch for parking_lot_core 2025-08-30 16:20:29 -04:00
Luke Parker
263d75d380 Pin actions in the pages workflow 2025-08-30 15:48:56 -04:00
Luke Parker
030185c7fc Pin the nightly version used within the no-std tests 2025-08-30 15:40:36 -04:00
Luke Parker
e2dc5db7aa Various feature tweaks and updates 2025-08-29 06:42:37 -04:00
Luke Parker
90bc364f9f Replace Ciphersuite::hash_to_F
The prior-present `Ciphersuite::hash_to_F` was a sin. Implementations took a
DST, yet were not required to securely handle it. It was also biased towards the
requirements of `modular-frost` as `ciphersuite` was originally written all
those years ago, when `modular-frost` had needs exceeding what `ff`, `group`
satisfied.

Now, the hash is bound to produce an output which can be converted to a scalar
with `ff::FromUniformBytes`. A new `hash_to_F` accepts a single argument, the
value to hash (removing the potential to insecurely handle the DST by removing
the DST entirely). Due to `digest` yielding a `GenericArray`, yet
`FromUniformBytes` taking a `const usize`, the `ciphersuite` crate now defines
a `FromUniformBytes` trait taking an array (then implemented for all satisfiers
of `ff::FromUniformBytes`). In order to get the array type from the
`GenericArray` (the output of the hash), `digest` is updated to the `0.11`
release candidate, which moves to `flexible-array`, solving that problem.

The existing, specific `hash_to_F` functions have been moved to `modular-frost`
as necessary.

`flexible-array` itself is patched to a fork due to
https://github.com/RustCrypto/hybrid-array/issues/131.
2025-08-29 05:21:43 -04:00
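A hedged sketch of the array-taking trait described (the in-tree version's
exact definition differs): it bridges array outputs to scalars by
blanket-implementing over everything satisfying `ff::FromUniformBytes`.

```rust
pub trait FromUniformBytes<const N: usize>: Sized {
  fn from_uniform_bytes(bytes: &[u8; N]) -> Self;
}

impl<const N: usize, F: ff::FromUniformBytes<N>> FromUniformBytes<N> for F {
  fn from_uniform_bytes(bytes: &[u8; N]) -> Self {
    <F as ff::FromUniformBytes<N>>::from_uniform_bytes(bytes)
  }
}
```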
Luke Parker
a4811c9a41 Tag dalek-ff-group 0.4.6 2025-08-29 02:07:29 -04:00
Luke Parker
12cfa6b2a5 Differentiate no-std from alloc within tests/no-std
Fixes `no-std` builds for packages which intended to be `no-std` (without
`alloc`).

Updates a variety of MSRVs to 1.73 due to `flexible-transcript` no longer using
`std-shims` to achieve 1.66 (as `std-shims` requires `alloc`). A future
improvement would be for `std-shims` to have an `alloc` feature and only
provide MSRV shims without it.
2025-08-29 01:23:18 -04:00
Luke Parker
0c71b6fc4d Fix 32-bit, no-std builds of crypto limbs 2025-08-29 01:04:40 -04:00
Luke Parker
ffe1b60a11 Move the contents of the evrf/ folder to the crypto/ folder
It was justified when it had several libraries, which it no longer does thanks
to the upstreaming with monero-oxide.
2025-08-29 00:25:09 -04:00
Luke Parker
5526b8d439 Use SEC1 for the encoding of secq256k1 points, like secp256k1 does 2025-08-28 23:52:05 -04:00
Luke Parker
beac35c119 Introduce the complete point addition formulas to short-weierstrass 2025-08-28 23:16:16 -04:00
Luke Parker
62bb75e09a Move secq256k1 to short-weierstrass 2025-08-28 23:07:22 -04:00
Luke Parker
45bd376c08 Fix handling of prime/composite-order curves within short-weierstrass 2025-08-28 22:31:33 -04:00
Luke Parker
da190759a9 Move embedwards25519 over to short-weierstrass 2025-08-28 22:08:17 -04:00
Luke Parker
f2d399ba1e Add crate for working with short Weierstrass elliptic curves 2025-08-28 08:20:31 -04:00
Luke Parker
220bcbc592 Add prime-field crate
prime-field introduces a macro to generate a prime field, in its entirety,
de-duplicating code across minimal-ed448, embedwards25519, and secq256k1.
2025-08-28 03:36:15 -04:00
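A hedged sketch of the de-duplication approach (a toy, not the actual
prime-field macro): one macro stamps out a field type per modulus.

```rust
macro_rules! prime_field {
  ($name:ident, $modulus:expr) => {
    #[derive(Clone, Copy, Debug, PartialEq, Eq)]
    pub struct $name(u64);

    impl $name {
      pub fn new(value: u64) -> Self {
        Self(value % $modulus)
      }
      pub fn add(self, other: Self) -> Self {
        Self((((self.0 as u128) + (other.0 as u128)) % ($modulus as u128)) as u64)
      }
    }
  };
}

// Multiple fields from one definition, as done across the three crates named
prime_field!(ToyFieldA, 65_521u64);
prime_field!(ToyFieldB, 4_294_967_291u64);
```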
Luke Parker
85949f4b04 Update from kayabaNerve/monero-oxide to monero-oxide/monero-oxide 2025-08-28 01:09:18 -04:00
Luke Parker
2f833dec77 Add job to competently check MSRVs
The prior workflow (now deleted) required manually specifying the packages to
check, and only checked that the package could compile under the stated MSRV. It
didn't verify it was actually the _minimum_ supported Rust version. The new
version finds the MSRV from scratch to check if the stated MSRV aligns.

Updates stated MSRVs accordingly.

Also removes many explicit dependencies from secq256k1 for their re-exports via
k256. Not directly relevant, just part of tidying up all the `toml`s.
2025-08-26 14:13:00 -04:00
Luke Parker
e3e41324c9 Update licenses 2025-08-25 10:06:35 -04:00
Luke Parker
6ed7c5d65e Update dependencies always built with optimizations 2025-08-25 09:50:13 -04:00
Luke Parker
9dddfd91c8 Fix clippy, update old dependencies 2025-08-25 09:17:29 -04:00
Luke Parker
c24b694fb2 Correct secq256k1/embedwards25519 Zeroize implementations 2025-08-25 05:06:56 -04:00
Luke Parker
738babf7e9 dkg-evrf crate
monero-oxide relies on ciphersuite, which is in-tree, yet we've made breaking
changes since. This commit adds a patch so
monero-oxide -> patches/ciphersuite -> crypto/ciphersuite, with
patches/ciphersuite resolving the breaking changes.
2025-08-25 04:49:54 -04:00
Luke Parker
33faa53b56 Remove dleq, dkg-promote, dkg-pedpop per #597
Does not move them to a new repository at this time.
2025-08-24 21:40:18 -04:00
Luke Parker
8c366107ae Merge branch 'develop' into next
This resolves the conflicts and gets the workspace `Cargo.toml`s to not be
invalid. It doesn't actually get clippy to pass again yet.

Does move `crypto/dkg/src/evrf` into a new `crypto/dkg/evrf` crate (which does
not yet compile).
2025-08-23 15:05:13 -04:00
Luke Parker
b59b1f59dd Remove ToB report PDF by request 2025-07-18 03:19:10 -04:00
Luke Parker
cc4a65e82a Add Trail of Bits audit of our Ethereum code 2025-07-12 03:29:56 -04:00
Luke Parker
4e0c58464f Update Router documentation after following B2 (B1 redux) 2025-04-12 10:04:10 -04:00
Luke Parker
205da3fd38 Update the Ethereum processor to the Router messages including their on-chain address
This only updates the syntax. It does not yet actually route the address as
necessary.
2025-04-12 09:57:29 -04:00
Luke Parker
f7e63d4944 Have Router signatures additionally sign the Router's address (B2)
This slightly modifies the gas usage of the contract in a way breaking the
existing vector. A new, much simpler, vector has been provided instead.
2025-04-12 09:55:40 -04:00
Luke Parker
b5608fc3d2 Update dated documentation for verifySignature (B1) 2025-04-12 08:42:45 -04:00
Luke Parker
33018bf6da Explicitly ban the identity point as an Ethereum Schnorr public key (002)
This doesn't have a well-defined affine representation. k256's behavior,
mapping it to (0, 0), means this would've been rejected anyways (so this isn't
a change of any current behavior), but it's best not to rely on such an
implementation detail.
2025-04-12 08:38:06 -04:00
Luke Parker
bef90b2f1a Fix gas estimation discrepancy when gas isn't monotonic 2025-04-12 08:32:11 -04:00
Luke Parker
184c02714a alloy-core 1.0, alloy 0.14, revm 0.22 (001)
This moves to Rust 1.86 as we were prior on Rust 1.81, and the new alloy
dependencies require 1.82.

The revm API changes were notable for us. Instead of relying on a modified call
instruction (with deep introspection into the EVM design), we now use the more
recent and now more prominent Inspector API. This:

1) Lets us perform far less introspection
2) Forces us to rewrite the gas estimation code we just had audited

Thankfully, it itself should be much easier to read/review, and our existing
test suite has extensively validated it.

This resolves 001 which was a concern for if/when this upgrade occurs. By doing
it now, with a dedicated test case ensuring the issue we would have had with
alloy-core 0.8 and `validate=false` isn't actively an issue, we resolve it.
2025-04-12 08:09:09 -04:00
Luke Parker
5a7b815e2e Update nightly version 2025-02-04 07:57:04 -05:00
Luke Parker
22e411981a Resolve clippy errors from recent merges 2025-01-30 05:04:28 -05:00
akildemir
11d48d0685 add Serai JSON-RPC methods (#627)
* add serai rpc methods

* fix machete & dex quote price api

* fix validators api

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2025-01-30 04:23:03 -05:00
akildemir
e4cc23b72d add economic security pallet tests (#623) 2025-01-30 04:19:12 -05:00
akildemir
52d853c8ba add validator sets pallet tests (#614)
* add validator sets pallet tests

* update tests with new types

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2025-01-30 04:16:19 -05:00
akildemir
9c33a711d7 add in instructions pallet tests (#608)
* add pallet tests

* set mock runtime AllowMint to correct type
2025-01-30 04:13:21 -05:00
Luke Parker
a275023cfc Finish merging in the develop branch 2025-01-30 03:14:24 -05:00
Luke Parker
258c02ff39 Merge branch 'develop' into next
This is an initial resolution of conflicts which does not work.
2025-01-30 00:56:29 -05:00
Luke Parker
3655dc723f Use clearer identity check in equality 2025-01-30 00:13:55 -05:00
Luke Parker
315d4fb356 Correct decoding identity for embedwards25519/secq256k1 2025-01-29 23:01:45 -05:00
Luke Parker
2bc880e372 Downstream the eVRF libraries from FCMP++
Also adds no-std support to secq256k1 and embedwards25519.
2025-01-29 22:29:40 -05:00
Luke Parker
19422de231 Ensure a non-zero fee in the Router OutInstruction gas fuzz test 2025-01-27 15:39:55 -05:00
Luke Parker
fa0dadc9bd Rename Deployer bytecode to initcode 2025-01-27 15:39:06 -05:00
Luke Parker
f004c8726f Remove unused library bytecode from ethereum-schnorr-contract 2025-01-27 15:38:44 -05:00
Luke Parker
835b5bb06f Split tests across a few files, fuzz generate OutInstructions
Tests successful gas estimation even with more complex behaviors.
2025-01-27 13:59:11 -05:00
Luke Parker
0484113254 Fix the ability for a malicious adversary to snipe ERC20s out via re-entrancy from the ERC20 contract 2025-01-27 13:07:35 -05:00
Luke Parker
17cc10b3f7 Test Execute result decoding, reentrancy 2025-01-27 13:01:52 -05:00
Luke Parker
7e01589fba Erc20::approve for DestinationType::Contract
This allows the CREATE code to bork without the Serai router losing access to
the coins in question. It does incur overhead on the deployed contract, which
now not only has to query its balance but also has to call transferFrom, but
it's a safer pattern and not a UX detriment.

This also improves documentation.
2025-01-27 11:58:39 -05:00
Luke Parker
f8c3acae7b Check the Router-deployed contracts' code 2025-01-27 07:48:37 -05:00
Luke Parker
0957460f27 Add supporting security commentary to Router.sol 2025-01-27 07:36:23 -05:00
Luke Parker
ea00ba9ff8 Clarified usage of CREATE
CREATE was originally intended for gas savings. While one sketch did move to
CREATE2, the security concerns around address collisions (requiring all init
codes not be malleable to achieve security) continue to justify this.

To resolve the gas estimation concerns raised in the prior commit, the
createAddress function has been made constant-gas.
2025-01-27 07:36:13 -05:00
Luke Parker
a9625364df Test createAddress
Benchmarks gas usage

Note the estimator needs to be updated as this is now variable-gas to the
state.
2025-01-27 05:37:56 -05:00
Luke Parker
75c6427d7c CREATE uses RLP, not ABI-encoding 2025-01-27 04:24:25 -05:00
Luke Parker
e742a6b0ec Test ERC20 OutInstructions 2025-01-27 02:08:01 -05:00
Luke Parker
5164a710a2 Redo gas estimation via revm
Adds a minimal amount of packages. Does add decent complexity. Avoids having
constants which aren't exact, due to things like the quadratic memory cost,
and the issues which accordingly come with such estimates.
2025-01-26 22:42:50 -05:00
Luke Parker
27c1dc4646 Test ETH address/code OutInstructions 2025-01-24 18:46:17 -05:00
Luke Parker
3892fa30b7 Test an empty execute 2025-01-24 17:13:36 -05:00
Luke Parker
ed599c8ab5 Have the Batch event encode the amount of results
Necessary to distinguish a bitvec with 1 result from a bitvec with 7 results.
2025-01-24 17:04:25 -05:00
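A hedged sketch of the ambiguity being fixed: a single byte of bitvec holds
anywhere from 1 to 8 results, so the count must be explicit.

```rust
fn decode_results(count: usize, bits: &[u8]) -> Vec<bool> {
  (0 .. count).map(|i| ((bits[i / 8] >> (i % 8)) & 1) == 1).collect()
}

fn main() {
  // The same byte decodes differently depending on the explicit count
  assert_eq!(decode_results(1, &[0b0000_0001]).len(), 1);
  assert_eq!(decode_results(7, &[0b0000_0001]).len(), 7);
}
```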
Luke Parker
29bb5e21ab Take advantage of RangeInclusive for specifying filters' blocks 2025-01-24 07:44:47 -05:00
Luke Parker
604a4b2442 Add execute_tx to fill in missing test cases reliant on it 2025-01-24 07:33:36 -05:00
Luke Parker
977dcad86d Test the Router rejects invalid signatures 2025-01-24 07:22:43 -05:00
Luke Parker
cefc542744 Test SeraiKeyWasNone 2025-01-24 06:58:54 -05:00
Luke Parker
164fe9a14f Test Router's InvalidSeraiKey error 2025-01-24 06:41:24 -05:00
Luke Parker
f948881eba Simplify async code in in_instructions_unordered
Outsources fetching the ERC20 events to top_level_transfers_unordered.
2025-01-24 05:43:04 -05:00
Luke Parker
201b675031 Test ERC20 InInstructions 2025-01-24 03:45:04 -05:00
Luke Parker
3d44766eff Add ERC20 InInstruction test 2025-01-24 03:23:58 -05:00
Luke Parker
a63a86ba79 Test Ether InInstructions 2025-01-23 09:30:54 -05:00
Luke Parker
e922264ebf Add selector collisions to the IERC20 lib 2025-01-23 08:25:59 -05:00
Luke Parker
7e53eff642 Fix the async flow with the Router
It had sequential async calls with complexity O(n), with a variety of redundant
calls. There was also a constant of... 4? 5? for each item. Now, the total
sequence depth is just 3-4.
2025-01-23 06:16:58 -05:00
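A hedged sketch of the restructuring (names illustrative, not the Router's
actual code): awaiting per-item calls concurrently bounds the sequence depth
by a constant, instead of a chain of sequential awaits with depth O(n).

```rust
use futures_util::future::try_join_all;

async fn fetch_all<T, U, E, Fut>(items: Vec<T>, fetch: impl Fn(T) -> Fut) -> Result<Vec<U>, E>
where
  Fut: std::future::Future<Output = Result<U, E>>,
{
  // All requests are in flight at once: one await, regardless of item count
  try_join_all(items.into_iter().map(fetch)).await
}
```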
Luke Parker
669b8b776b Work on testing the Router
Completes the `Executed` enum in the router. Adds an `Escape` struct. Both are
needed for testing purposes.

Documents the gas constants in intent and reasoning.

Adds modernized tests around key rotation and the escape hatch.

Also updates the rest of the codebase which had accumulated errors.
2025-01-23 02:06:06 -05:00
Luke Parker
6508957cbc Make a proper nonReentrant modifier
Prior, execute couldn't be called twice within a single TX. Now, it can.

Also adds a bit more context to the escape hatch events/errors.
2025-01-23 00:04:44 -05:00
Luke Parker
373e794d2c Check the escaped to address has code set
Document choice not to use a confirmation flow there as well.
2025-01-22 22:45:51 -05:00
Luke Parker
c8f3a32fdf Replace custom read/write impls in router with borsh 2025-01-21 03:49:29 -05:00
Luke Parker
f690bf831f Remove old code still marked TODO 2025-01-19 02:36:34 -05:00
Luke Parker
0b30ac175e Restore workspace-wide clippy
Fixes accumulated errors in the Substrate code. Modifies the runtime build to
work with a modern clippy. Removes e2e tests from the workspace.
2025-01-19 02:27:35 -05:00
Luke Parker
47560fa9a9 Test manually implemented serializations in the Router lib 2025-01-19 00:45:26 -05:00
Luke Parker
9d57c4eb4d Downscope dependencies in serai-processor-ethereum-primitives, const-hex decode bytecode in ethereum-schnorr-contract 2025-01-19 00:16:50 -05:00
Luke Parker
642ba00952 Update Deployer README, 80-character line length 2025-01-19 00:03:56 -05:00
Luke Parker
3c9c12d320 Test the Deployer contract 2025-01-18 23:58:38 -05:00
Luke Parker
f6b52b3fd3 Maximum line length of 80 in Deployer.sol 2025-01-18 15:22:58 -05:00
Luke Parker
0d906363a0 Simplify and test deterministically_sign 2025-01-18 15:13:39 -05:00
Luke Parker
8222ce78d8 Correct accumulated errors in the processor 2025-01-18 12:41:57 -05:00
Luke Parker
cb906242e7 2025 nightly
Supersedes #640.
2025-01-18 12:41:25 -05:00
Luke Parker
2a19e9da93 Update to libp2p 0.54
This is the same libp2p Substrate uses as of
https://github.com/paritytech/polkadot-sdk/pull/6248.
2025-01-17 04:50:15 -05:00
Luke Parker
2226dd59cc Comment all dependencies in substrate/node
Causes the Cargo.lock to no longer include the substrate dependencies
(including its copy of libp2p).
2025-01-17 04:09:27 -05:00
Luke Parker
be2098d2e1 Remove Serai from the ConfirmDkgTask 2025-01-15 21:00:50 -05:00
Luke Parker
6b41f32371 Correct handling of InvalidNonce within the coordinator 2025-01-15 20:48:54 -05:00
Luke Parker
19b87c7f5a Add the DKG confirmation flow
Finishes the coordinator redo
2025-01-15 20:29:57 -05:00
Luke Parker
505f1b20a4 Correct re-attempts for the DKG Confirmation protocol
Also spawns the SetKeys task.
2025-01-15 17:49:41 -05:00
Luke Parker
8b52b921f3 Have the Tributary scanner yield DKG confirmation signing protocol data 2025-01-15 15:16:30 -05:00
Luke Parker
f36bbcba25 Flatten the map of preprocesses/shares, send Participant index with DkgParticipation 2025-01-15 14:24:51 -05:00
Luke Parker
167826aa88 Implement SeraiAddress <-> Participant mapping and add RemoveParticipant transactions 2025-01-15 12:51:35 -05:00
Luke Parker
bea4f92b7a Fix parity-db builds for the Coordinator 2025-01-15 12:10:11 -05:00
Luke Parker
7312fa8d3c Spawn PublishSlashReportTask
Updates it so that it'll try for every network instead of returning after any
network fails.

Uses the SlashReport type throughout the codebase.
2025-01-15 12:08:28 -05:00
Luke Parker
92a4cceeeb Spawn PublishBatchTask
Also removes the expectation Batches published via it are sent in an ordered
fashion. That won't be true if the signing protocols complete out-of-order (as
possible when we are signing them in parallel).
2025-01-15 11:21:55 -05:00
Luke Parker
3357181fe2 Handle sign::ProcessorMessage::[Preprocesses, Shares] 2025-01-15 10:47:47 -05:00
Luke Parker
7ce5bdad44 Don't add transactions for topics which have yet to be recognized 2025-01-15 07:01:24 -05:00
Luke Parker
0de3fda921 Further space out requests for cosigns from the network 2025-01-15 05:59:56 -05:00
Luke Parker
cb410cc4e0 Correct how we handle rounding errors within the penalty fn
We explicitly no longer slash stakes but we still set the maximum slash to the
allocated stake + the rewards. Now, the reward slash is bound to the rewards
and the stake slash is bound to the stake. This prevents an improperly rounded
reward slash from effecting a stake slash.
2025-01-15 02:46:31 -05:00
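A hedged sketch of the bounding described (names and types hypothetical): each
component saturates against its own pool, so a rounding error in one can't
spill into the other.

```rust
fn apply_penalty(reward_slash: u64, rewards: u64, stake_slash: u64, stake: u64) -> (u64, u64) {
  // The reward slash is bound to the rewards, the stake slash to the stake
  (reward_slash.min(rewards), stake_slash.min(stake))
}
```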
Luke Parker
6c145a5ec3 Disable offline, disruptive slashes
Reasoning commented in codebase
2025-01-14 11:44:13 -05:00
Luke Parker
a7fef2ba7a Redesign Slash/SlashReport types with a function to calculate the penalty 2025-01-14 07:51:39 -05:00
Luke Parker
291ebf5e24 Have serai-task warnings print with the name of the task 2025-01-14 02:52:26 -05:00
Luke Parker
5e0e91c85d Add tasks to publish data onto Serai 2025-01-14 01:58:26 -05:00
Luke Parker
b5a6b0693e Add a proper error type to ContinuallyRan
This isn't strictly necessary. Because we just log the error and never match
off of it, we don't need any structure beyond String (or now Debug, which
still gives us a way to print the error). This is for the ergonomics of not
having to constantly write `.map_err(|e| format!("{e:?}"))`.
2025-01-12 18:29:08 -05:00
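A hedged sketch of the described bound (the actual trait is async and differs
in shape): requiring only `Debug` keeps errors printable for logging while
letting each task return its native error type.

```rust
trait ContinuallyRan {
  type Error: core::fmt::Debug;
  fn run_iteration(&mut self) -> Result<(), Self::Error>;
}
```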
Luke Parker
3cc2abfedc Add a task to publish slash reports 2025-01-12 17:47:48 -05:00
Luke Parker
0ce9aad9b2 Add flow to add transactions onto Tributaries 2025-01-12 07:32:45 -05:00
Luke Parker
e35aa04afb Start handling messages from the processor
Does route ProcessorMessage::CosignedBlock. The rest are stubbed with TODO.
2025-01-12 06:07:55 -05:00
Luke Parker
e7de5125a2 Have processor-messages use CosignIntent/SignedCosign, not the historic cosign format
Has yet to update the processor accordingly.
2025-01-12 05:52:33 -05:00
Luke Parker
158140c3a7 Add a proper error for intake_cosign 2025-01-12 05:49:17 -05:00
Luke Parker
df9a9adaa8 Remove direct dependencies of void, async-trait 2025-01-12 03:48:43 -05:00
Luke Parker
d854807edd Make message_queue::client::Client::send fallible
Allows tasks to report the errors themselves and handle retry in our
standardized way.
2025-01-11 21:57:58 -05:00
Luke Parker
f501d46d44 Correct disabling of Nagle's algorithm 2025-01-11 06:54:43 -05:00
Luke Parker
74106b025f Publish SlashReport onto the Tributary 2025-01-11 06:51:55 -05:00
Luke Parker
e731b546ab Update documentation 2025-01-11 05:13:43 -05:00
Luke Parker
77d60660d2 Move spawn_cosign from main.rs into tributary.rs
Also refines the tasks within tributary.rs a good bit.
2025-01-11 05:12:56 -05:00
Luke Parker
3c664ff05f Re-arrange coordinator/
coordinator/tributary was tributary-chain. This crate has been renamed
tributary-sdk and moved to coordinator/tributary-sdk.

coordinator/src/tributary was our instantiation of a Tributary, the Transaction
type and scan task. This has been moved to coordinator/tributary.

The main reason for this was due to coordinator/main.rs becoming untidy. There
is now a collection of clean, independent APIs present in the codebase.
coordinator/main.rs is to compose them. Sometimes, these compositions are a bit
silly (reading from a channel just to forward the message to a distinct
channel). That's more than fine as the code is still readable and the value
from the cleanliness of the APIs composed far exceeds the nits from having
these odd compositions.

This breaks down a bit as we now define a global database, and have some APIs
interact with multiple other APIs.

coordinator/src/tributary was a self-contained, clean API. The recently added
task present in coordinator/tributary/mod.rs, which bound it to the rest of the
Coordinator, wasn't.

Now, coordinator/src is solely the API compositions, and all self-contained
APIs are their own crates.
2025-01-11 04:14:21 -05:00
Luke Parker
c05b0c9eba Handle Canonical, NewSet from serai-coordinator-substrate 2025-01-11 03:07:15 -05:00
Luke Parker
6d5049cab2 Move the task providing transactions onto the Tributary to the Tributary module
Slims down the main file a bit
2025-01-11 02:13:23 -05:00
Luke Parker
1419ba570a Route from tributary scanner to message-queue 2025-01-11 01:55:36 -05:00
Luke Parker
542bf2170a Provide Cosign/CosignIntent for Tributaries 2025-01-11 01:31:28 -05:00
Luke Parker
378d6b90cf Delete old Tributaries on reboot 2025-01-10 20:10:05 -05:00
Luke Parker
cbe83956aa Flesh out Coordinator main
Lot of TODOs as the APIs are all being routed together.
2025-01-10 02:24:24 -05:00
Luke Parker
091d485fd8 Have the Tributary scanner DB be distinct from the cosign DB
Allows deleting the entire Tributary scanner DB upon retiring it.
2025-01-10 02:22:58 -05:00
Luke Parker
2a3eaf4d7e Wrap the entire Libp2p object in an Arc
Makes `Clone` calls significantly cheaper as now only the outer Arc is cloned
(the inner ones have been removed). Also wraps uses of Serai in an Arc as we
shouldn't actually need/want multiple caller connection pools.
2025-01-10 01:26:07 -05:00
Luke Parker
23122712cb Document validator jailing upon participation failures and slash report determination
These are TODOs. I just wanted to ensure this was written down and each seemed
too small for GH issues.
2025-01-09 19:50:39 -05:00
Luke Parker
47eb793ce9 Slash upon Tendermint evidence
Decoding slash evidence requires specifying the instantiated generic
`TendermintNetwork`. While irrelevant, that generic includes a type satisfying
`tributary::P2p`. It was only possible to route now that we've redone the P2P
API.
2025-01-09 06:58:00 -05:00
Luke Parker
9b0b5fd1e2 Have serai-cosign index finalized blocks' numbers 2025-01-09 06:57:26 -05:00
Luke Parker
893a24a1cc Better document bounds in serai-coordinator-p2p 2025-01-09 06:57:12 -05:00
Luke Parker
b101e2211a Complete serai-coordinator-p2p 2025-01-09 06:23:14 -05:00
Luke Parker
201a444e89 Remove tokio dependency from serai-coordinator-p2p
Re-implements tokio::sync::oneshot with a thin wrapper around async-channel.

Also replaces futures-util with futures-lite.
2025-01-09 02:16:05 -05:00
Luke Parker
9833911e06 Promote Request::Heartbeat from an enum variant to a struct 2025-01-09 01:41:42 -05:00
Luke Parker
465e8498c4 Make the coordinator's P2P modules their own crates 2025-01-09 01:26:25 -05:00
Luke Parker
adf20773ac Add libp2p module documentation 2025-01-09 00:40:07 -05:00
Luke Parker
295c1bd044 Document improper handling of session rotation in P2P allow list 2025-01-09 00:16:45 -05:00
Luke Parker
dda6e3e899 Limit each peer to one connection
Prevents dialing the same peer multiple times (successfully).
2025-01-09 00:06:51 -05:00
Luke Parker
75a00f2a1a Add allow_block_list to libp2p
The check in validators prevented connections from non-validators.
Non-validators could still participate in the network if they laundered their
connection through a malicious validator. allow_block_list ensures that peers,
not connections, are explicitly limited to validators.
2025-01-08 23:54:27 -05:00
Luke Parker
6cde2bb6ef Correct and document topic subscription 2025-01-08 23:16:04 -05:00
Luke Parker
20326bba73 Replace KeepAlive with ping
This is more standard and allows measuring latency.
2025-01-08 23:01:36 -05:00
Luke Parker
ce83b41712 Finish mapping Libp2p to the P2p trait API 2025-01-08 19:39:09 -05:00
Luke Parker
b2bd5d3a44 Remove Debug bound on tributary::P2p 2025-01-08 17:40:32 -05:00
Luke Parker
de2d6568a4 Actually implement the Peer abstraction for Libp2p 2025-01-08 17:40:08 -05:00
Luke Parker
fd9b464b35 Add a trait for the P2p network used in the coordinator
Moves all of the Libp2p code to a dedicated directory. Makes the Heartbeat task
abstract over any P2p network.
2025-01-08 17:01:37 -05:00
Luke Parker
376a66b000 Remove async-trait from tendermint-machine, tributary-chain 2025-01-08 16:41:11 -05:00
Luke Parker
2121a9b131 Spawn the task to select validators to dial 2025-01-07 18:17:36 -05:00
Luke Parker
419223c54e Build the swarm
Moves UpdateSharedValidatorsTask to validators.rs. While prior planned to
re-use a validators object across connecting and peer state management, the
current plan is to use an independent validators object for each to minimize
any contention. They should be built infrequently enough, and cheap enough to
update in the majority case (due to quickly checking if an update is needed),
that this is fine.
2025-01-07 18:09:25 -05:00
Luke Parker
a731c0005d Finish routing our own channel abstraction around the Swarm event stream 2025-01-07 16:51:56 -05:00
Luke Parker
f27e4e3202 Move the WIP SwarmTask to its own file 2025-01-07 16:34:19 -05:00
Luke Parker
f55165e016 Add channels to send requests/recv responses 2025-01-07 15:51:15 -05:00
Luke Parker
d9e9887d34 Run the dial task whenever we have a peer disconnect 2025-01-07 15:36:42 -05:00
Luke Parker
82e753db30 Document risk of eclipse in the dial task 2025-01-07 15:35:34 -05:00
Luke Parker
052388285b Remove TaskHandle::close
TaskHandle::close meant run_now may panic if the task was closed. Now, tasks
are only closed when all handles are dropped, causing all handles to point to
running tasks (ensuring run_now won't panic).
2025-01-07 15:26:41 -05:00
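A sketch of the invariant (illustrative, not the actual serai-task API):

use std::sync::mpsc;

#[derive(Clone)]
pub struct TaskHandle(mpsc::Sender<()>);

impl TaskHandle {
  // Safe to call from any live handle: the task only exits once every handle
  // is dropped, so a live handle implies a live task.
  pub fn run_now(&self) {
    self.0.send(()).expect("task exited while a handle was still held");
  }
}

pub fn spawn_task(mut work: impl FnMut() + Send + 'static) -> TaskHandle {
  let (send, recv) = mpsc::channel();
  std::thread::spawn(move || {
    // `recv` errors solely once all `TaskHandle`s have been dropped.
    while recv.recv().is_ok() {
      work();
    }
  });
  TaskHandle(send)
}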
Luke Parker
47a4e534ef Update serai-processor-signers to VariantSignid::Batch([u8; 32]) 2025-01-07 15:26:23 -05:00
Luke Parker
257f691277 Start filling out message handling in SwarmTask 2025-01-05 01:23:28 -05:00
Luke Parker
c6d0fb477c Inline noise into OnlyValidators
libp2p does support (noise, OnlyValidators) but it'll interpret it as either,
not a chain. This will act as the desired chain.
2025-01-05 00:55:25 -05:00
Luke Parker
96518500b1 Don't hold the shared Validators write lock while making requests to Serai 2025-01-05 00:29:11 -05:00
Luke Parker
2b8f481364 Parallelize requests within Validators::update 2025-01-05 00:17:05 -05:00
Luke Parker
479ca0410a Add commentary on the use of FuturesOrdered 2025-01-04 23:28:54 -05:00
Luke Parker
9a5a661d04 Start on the task to manage the swarm 2025-01-04 23:28:29 -05:00
Luke Parker
3daeea09e6 Only let active Serai validators connect over P2P 2025-01-04 22:21:23 -05:00
Luke Parker
a64e2004ab Dial new peers when we don't have the target amount 2025-01-04 18:04:24 -05:00
Luke Parker
f9f6d40695 Use Serai validator keys as PeerIds 2025-01-04 18:03:37 -05:00
Luke Parker
4836c1676b Don't consider the Serai set in the cosigning protocol
The Serai set SHOULD be banned from setting keys so this SHOULD be unreachable.
It's now explicitly unreachable.
2025-01-04 13:52:17 -05:00
Luke Parker
985261574c Add gossip behavior back to the coordinator 2025-01-03 14:00:20 -05:00
Luke Parker
3f3b0255f8 Tweak heartbeat task to run less often if there's no progress to be made 2025-01-03 13:59:14 -05:00
Luke Parker
5fc8500f8d Add task to heartbeat a tributary to the P2P code 2025-01-03 13:04:27 -05:00
Luke Parker
49c221cca2 Restore request-response code to the coordinator 2025-01-03 13:02:50 -05:00
Luke Parker
906e2fb669 Start cosigning on Cosign or Cosigned, not just on Cosigned 2025-01-03 10:30:39 -05:00
Luke Parker
ce676efb1f cargo update 2025-01-03 07:01:06 -05:00
Luke Parker
0a611cb155 Further flesh out tributary scanning
Renames `label` to `round` since `Label` was renamed to `SigningProtocolRound`.

Adds some more context-less validation to transactions which used to be done
within the custom decode function which was simplified via the usage of borsh.

Documents in processor-messages where the Coordinator sends each of its
messages.
2025-01-03 06:57:28 -05:00
Luke Parker
bcd3f14f4f Start work on cleaning up the coordinator's tributary handling 2025-01-02 09:11:04 -05:00
Luke Parker
6272c40561 Restore block_hash to Batch
It's not only helpful (to easily check where Serai's view of the external
network is) but it's necessary in case of a non-trivial chain fork to determine
which blockchain Serai considers canonical.
2024-12-31 18:10:47 -05:00
Luke Parker
2240a50a0c Rebroadcast cosigns for the currently evaluated session, not the latest intended
If Substrate has a block 500 with a key gen, and a block 600 with a key gen,
and the session starting on 500 never cosigns everything, everyone up-to-date
will want the cosigns for the session starting on block 500. Everyone
up-to-date will also be rebroadcasting the non-existent cosigns for the session
which has yet to start. This wouldn't cause a stall as eventually, each
individual set would cosign the latest notable block, and then that would be
explicitly synced, but it's still not the intended behavior.

We also won't even intake the cosigns for the latest intended session if it
exceeds the session we're currently evaluating. This does mean those behind on
the cosigning protocol wouldn't have rebroadcasted their historical cosigns,
and now will, but that's valuable as we don't actually know if we're behind or
up-to-date (per the issue posited above).
2024-12-31 17:17:12 -05:00
Luke Parker
7e2b31e5da Clean the transaction definitions in the coordinator
Moves to borsh for serialization. No longer includes nonces anywhere in the TX.
2024-12-31 12:14:32 -05:00
Luke Parker
8c9441a1a5 Redo coordinator's Substrate scanner 2024-12-31 10:37:19 -05:00
Luke Parker
5a42f66dc2 alloy 0.9 2024-12-30 11:09:09 -05:00
Luke Parker
b584a2beab Remove old DB entry from the scanner
We read from it but never wrote to it.

It was used to check we didn't flag a block as notable after reporting it, but
the check ran in the scan task as it scanned a block. We only batch/report
blocks after the scan task has scanned them, so it was very redundant.
2024-12-30 11:07:05 -05:00
Luke Parker
26ccff25a1 Split reporting Batches to the signer from the Batch test 2024-12-30 11:03:52 -05:00
Luke Parker
f0094b3c7c Rename Report task to Batch task 2024-12-30 10:49:35 -05:00
Luke Parker
458f4fe170 Move where we check if we should delay reporting of Batches 2024-12-30 10:18:38 -05:00
Luke Parker
1de8136739 Remove Session from VariantSignId::SlashReport
It's only there to make the VariantSignid unique across Sessions. By localizing
the VariantSignid to a Session, we avoid this, and can better ensure we don't
queue work for historic sessions.
2024-12-30 06:16:03 -05:00
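Illustratively (variant set abbreviated):

// IDs are scoped to a Session by their surrounding context, so the variant no
// longer needs to embed one to be unique.
enum VariantSignId {
  Batch([u8; 32]),
  SlashReport, // was SlashReport(Session) prior to this commit
}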
Luke Parker
445c49f030 Have the scanner's report task ensure handovers only occur if Batches are valid
This is incomplete at this time. The logic is fine, but needs to be moved to a
distinct location to handle singular blocks which produce multiple Batches.
2024-12-30 06:11:47 -05:00
Luke Parker
5b74fc8ac1 Merge ExternalKeyForSessionToSignBatch into InfoForBatch 2024-12-30 05:34:13 -05:00
Luke Parker
e67e301fc2 Have the processor verify the published Batches match expectations 2024-12-30 05:21:26 -05:00
Luke Parker
1d50792eed Document serai-db with bounds and intent 2024-12-26 02:35:32 -05:00
Luke Parker
9c92709e62 Delay cosign acknowledgments 2024-12-26 01:04:20 -05:00
Luke Parker
3d15710a43 Only check the cosign is after its start block if faulty
We don't have consensus on the session's last block, so we shouldn't check if
the cosign is before the session ends. What matters is that network, within its
set, claims it's still active at that block (on its view of the blockchain).
2024-12-26 00:26:48 -05:00
Luke Parker
df06da5552 Only check if the cosign is stale if it isn't faulty
If it is faulty, we want to archive it regardless.
2024-12-26 00:24:48 -05:00
Luke Parker
cef5bc95b0 Revert prior commit
An archive of all GlobalSessions is necessary to check for faults. The storage
cost is also minimal. While it should be avoided if it can be, it can't be
here.
2024-12-26 00:15:49 -05:00
Luke Parker
f336ab1ece Remove GlobalSessions DB entry
If we read the currently-being-evaluated session from the evaluator, we can
avoid paying the storage costs on all sessions ad-infinitum.
2024-12-25 23:57:51 -05:00
Luke Parker
2aebfb21af Remove serai from the cosign evaluator 2024-12-25 23:47:21 -05:00
Luke Parker
56af6c44eb Remove usage of serai from intake_cosign 2024-12-25 21:19:04 -05:00
Luke Parker
4b34be05bf rocksdb 0.23 2024-12-25 19:48:48 -05:00
Luke Parker
5b337c3ce8 Prevent a malicious validator set from overwriting a notable cosign
Also prevents panics from an invalid Serai node (removing the assumption of an
honest Serai node).
2024-12-25 02:11:05 -05:00
Luke Parker
e119fb4c16 Replace Cosigns by extending NetworksLatestCosignedBlock
Cosigns was an archive of every single cosign ever received. By scoping
NetworksLatestCosignedBlock to be by the global session, we have the latest
cosign for each network in a session (valid to replace all prior cosigns by
that network within that session, even for the purposes of fault) and
automatically have the notable cosigns indexed (as they are the latest ones
within their session). This not only saves space but also allows optimizing
evaluation a bit.
2024-12-25 01:45:37 -05:00
Luke Parker
ef972b2658 Add cosign signature verification 2024-12-25 00:06:46 -05:00
Luke Parker
4de1a5804d Dedicated library for intending and evaluating cosigns
Not only cleans the existing cosign code but enables non-Serai-coordinators to
evaluate cosigns if they gain access to a feed of them (such as over an RPC).
This would let centralized services not only track the finalized chain but also the
cosigned chain without directly running a coordinator.

Still being wrapped up.
2024-12-22 06:41:55 -05:00
Luke Parker
147a6e43d0 Split task from serai-processor-primitives into serai-task 2024-12-19 10:08:13 -05:00
Luke Parker
066aa9eda4 cargo update
Resolves RUSTSEC-2024-0421
2024-12-12 00:45:19 -05:00
Luke Parker
9593a428e3 alloy 0.8 2024-12-11 01:02:58 -05:00
Luke Parker
5b3c5ec02b Basic Ethereum escapeHatch test 2024-12-09 02:00:17 -05:00
Luke Parker
9ccfa8a9f5 Fix deny 2024-12-08 22:01:43 -05:00
Luke Parker
18897978d0 thiserror 2.0, cargo update 2024-12-08 21:55:37 -05:00
Luke Parker
3192370484 Add Serai key confirmation to prevent rotating to an unusable key
Also updates alloy to the latest version
2024-12-08 20:42:37 -05:00
Luke Parker
8013c56195 Add/correct msrv labels 2024-12-08 18:27:15 -05:00
Luke Parker
834c16930b Add a bitmask of OutInstruction events to Executed
Allows explorers to provide clarity on what occurred.
2024-11-02 21:00:01 -04:00
Luke Parker
2920987173 Add a re-entrancy guard to Router.execute 2024-11-02 20:12:48 -04:00
Luke Parker
26230377b0 Define IRouterWithoutCollisions which Router inherits from
This ensures Router implements most of IRouterWithoutCollisions. It solely
leaves us to confirm Router implements the extensions defined in IRouter.
2024-11-02 19:10:39 -04:00
Luke Parker
2f5c0c68d0 Add selector collisions to Router to make it IRouter compatible 2024-11-02 18:13:02 -04:00
Luke Parker
8de42cc2d4 Add IRouter 2024-11-02 13:19:07 -04:00
Luke Parker
cf4123b0f8 Update how signatures are handled by the Router 2024-11-02 10:47:09 -04:00
Luke Parker
6a520a7412 Work on testing the Router 2024-10-31 02:23:59 -04:00
Luke Parker
b2ec58a445 Update serai-ethereum-processor to compile 2024-10-30 21:48:40 -04:00
Luke Parker
8e800885fb Simplify deterministic signing process in serai-processor-ethereum-primitives
This should be easier to specify/do an alternative implementation of.
2024-10-30 21:36:31 -04:00
Luke Parker
2a427382f1 Natspec, slither Deployer, Router 2024-10-30 21:35:43 -04:00
Luke Parker
ce1689b325 Expand tests for ethereum-schnorr-contract 2024-10-28 18:08:31 -04:00
Luke Parker
0b61a75afc Add lint against string slicing
These are tricky, as slicing panics if the slice doesn't hit a UTF-8 codepoint
boundary.
2024-10-02 21:58:48 -04:00
Luke Parker
2aee21e507 Fix decomposition -> divisor points vartime due to branch prediction/cache rules 2024-09-29 04:19:16 -04:00
Luke Parker
b3e003bd5d cargo +nightly fmt 2024-09-25 10:22:49 -04:00
Luke Parker
251a6e96e8 Constant-time divisors (#617)
* WIP constant-time implementation of the ec-divisors library

* Fix misc logic errors in poly.rs

* Remove accidentally committed test statements

* Fix ConstantTimeEq for CoefficientIndex

* Correct the iterations formula

x**3 / (0 y + x**1) would prior be considered indivisible with iterations = 0.
It is divisible however. The amount of iterations should be the amount of
coefficients within the numerator *excluding the coefficient for y**0 x**0*.

* Poly PartialEq, conditional_select_poly which checks poly structure equivalence

If the first passed argument is smaller than the latter, it's padded to the
necessary length.

Also adds code to trim the remainder as the remainder is the value modulo, so
it's very important it remains concise and workable.

* Fix the line function

It selected the case if both were identity before selecting the case if either
were identity, the latter overwriting the former.

* Final fixes re: ct_get

1) Our quotient structure does need to be of size equal to the numerator
   entirely to prevent out-of-bounds reads on it
2) We need to get from yx_coefficients if of length >=, so if the length is 1
   we can read y_pow=1 from it. If y_pow=0, and its length is 0 so it has no
   inner Vecs, we need to fall back with the guard y_pow != 0.

* Add a trim algorithm to lib.rs to prevent Polys from becoming unbearably gigantic

Our Poly algorithm is incredibly leaky. While it presumably should be improved,
we can take advantage of our known structure while constructing divisors (and
the small modulus) to simply trim out the zero coefficients leaked. This
maintains Polys in a manageable size.

* Move constant-time scalar mul gadget divisor creation from dkg to ec-divisors

Anyone creating a divisor for the scalar mul gadget should use constant time
code, so this code should at least be in the EC gadgets crate. It's of
non-trivial complexity to deal with otherwise.

* Remove unsafe, cache timing attacks from ec-divisors
2024-09-24 17:27:05 -04:00
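The pad-then-select rule can be sketched with the subtle crate (a flat Vec of coefficients stands in for the real Poly structure):

use subtle::{Choice, ConditionallySelectable};

// Select `b` if `choice` is set, else `a`, padding the shorter argument so the
// selection itself never branches on length.
fn conditional_select_poly<F: ConditionallySelectable + Default>(
  mut a: Vec<F>,
  mut b: Vec<F>,
  choice: Choice,
) -> Vec<F> {
  let len = a.len().max(b.len());
  a.resize_with(len, F::default);
  b.resize_with(len, F::default);
  a.iter().zip(&b).map(|(a, b)| F::conditional_select(a, b, choice)).collect()
}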
Luke Parker
2c8af04781 machete, drain > mem::swap for clarity reasons 2024-09-19 23:36:32 -07:00
Luke Parker
a0ed043372 Move old processor/src directory to processor/TODO 2024-09-19 23:36:32 -07:00
Luke Parker
2984d2f8cf Misc comments 2024-09-19 23:36:32 -07:00
Luke Parker
554c5778e4 Don't track deployment block in the Router
This technically has a TOCTOU where we sync an Epoch's metadata (signifying we
did sync to that point), then check if the Router was deployed, yet at that
very moment the node resets to genesis. By ensuring the Router is deployed, we
avoid this (and don't need to track the deployment block in-contract).

Also uses a JoinSet to sync the 32 blocks in parallel.
2024-09-19 23:36:32 -07:00
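A sketch of the parallel sync with a JoinSet (fetch_block is a hypothetical stand-in for the actual RPC call):

use tokio::task::JoinSet;

async fn fetch_block(number: u64) -> Result<(), String> {
  // Fetch and handle block `number` here.
  let _ = number;
  Ok(())
}

async fn sync_epoch(start: u64) -> Result<(), String> {
  let mut set = JoinSet::new();
  for number in start .. (start + 32) {
    set.spawn(fetch_block(number));
  }
  // Completion order is arbitrary; the JoinSet solely runs the fetches in parallel.
  while let Some(result) = set.join_next().await {
    result.map_err(|e| e.to_string())??;
  }
  Ok(())
}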
Luke Parker
7e4c59a0a3 Have the Router track its deployment block
Prevents a consensus split where some nodes would drop transfers if their node
didn't think the Router was deployed, and some would handle them.
2024-09-19 23:36:32 -07:00
Luke Parker
294462641e Don't have the ERC20 collapse the top-level transfer ID to the transaction ID
Uses the ID of the transfer event associated with the top-level transfer.
2024-09-19 23:36:32 -07:00
Luke Parker
ae76749513 Transfer ETH with CREATE, not prior to CREATE
Saves a few thousand gas.
2024-09-19 23:36:32 -07:00
Luke Parker
1e1b821d34 Report a Change Output with every Eventuality to ensure we don't fall out of synchrony 2024-09-19 23:36:32 -07:00
Luke Parker
702b4c860c Add dummy fee values to the scheduler 2024-09-19 23:36:32 -07:00
Luke Parker
bc1bbf9951 Set a fixed fee transferred to the caller for publication
Avoids the risk of the gas used by the contract exceeding the gas presumed to
be used (causing an insolvency).
2024-09-19 23:36:32 -07:00
Luke Parker
ec9211fd84 Remove accidentally included bitcoin feature from processor-bin 2024-09-19 23:36:32 -07:00
Luke Parker
4292660eda Have the Ethereum scheduler create Batches as necessary
Also introduces the fee logic, despite it being stubbed.
2024-09-19 23:36:32 -07:00
Luke Parker
8ea5acbacb Update the Router smart contract to pay fees to the caller
The caller is paid a fixed fee per unit of gas spent. That arguably
incentivizes the publisher to raise the gas used by internal calls, yet this
doesn't effect the user UX as they'll have flatly paid the worst-case fee
already. It does pose a risk where callers are arguably incentivized to cause
transaction failures which consume all the gas, not just increased gas, yet:

1) Modern smart contracts don't error by consuming all the gas
2) This is presumably infeasible
3) Even if it was feasible, the gas fees gained presumably exceed the gas fees
   spent causing the failure

The benefit to only paying the callers for the gas used, not the gas allotted,
is it allows Serai to build up a buffer. While this should be minor, a few
cents on every transaction at best, if we ever do have any costs slip through
the cracks, it ideally is sufficient to handle those.
2024-09-19 23:36:32 -07:00
Luke Parker
1b1aa74770 Correct forge fmt config 2024-09-19 23:36:32 -07:00
Luke Parker
861a8352e5 Update to the latest bitcoin-serai 2024-09-19 23:36:32 -07:00
Luke Parker
e64827b6d7 Mark files in TODO/ with "TODO" to ensure it pops up on search 2024-09-19 23:36:32 -07:00
Luke Parker
c27aaf8658 Merge BlockWithAcknowledgedBatch and BatchWithoutAcknowledgeBatch
Offers a simpler API to the coordinator.
2024-09-19 23:36:32 -07:00
Luke Parker
53567e91c8 Read NetworkId from ScannerFeed trait, not env 2024-09-19 23:36:32 -07:00
Luke Parker
1a08d50e16 Remove unused code in the Ethereum processor 2024-09-19 23:36:32 -07:00
Luke Parker
855e53164e Finish Ethereum ScannerFeed 2024-09-19 23:36:32 -07:00
Luke Parker
1367e41510 Add hooks to the main loop
Lets the Ethereum processor track the first key set as soon as it's set.
2024-09-19 23:36:32 -07:00
Luke Parker
a691be21c8 Call tidy_keys upon queue_key
Prevents the potential case of the substrate task and the scan task writing to
the same storage slot at once.
2024-09-19 23:36:32 -07:00
Luke Parker
673cf8fd47 Pass the latest active key to the Block's scan function
Effectively necessary for networks on which we utilize account abstraction in
order to know what key to associate the received coins with.
2024-09-19 23:36:32 -07:00
Luke Parker
118d81bc90 Finish the Ethereum TX publishing code 2024-09-19 23:36:32 -07:00
Luke Parker
e75c4ec6ed Explicitly add an unspendable script path to the processor's generated keys 2024-09-19 23:36:32 -07:00
Luke Parker
9e628d217f cargo fmt, move ScannerFeed from String to the RPC error 2024-09-19 23:36:32 -07:00
Luke Parker
a717ae9ea7 Have the TransactionPublisher build a TxLegacy from Transaction 2024-09-19 23:36:32 -07:00
Luke Parker
98c3f75fa2 Move the Ethereum Action machine to its own file 2024-09-19 23:36:32 -07:00
Luke Parker
18178f3764 Add note on the returned top-level transfers being unordered 2024-09-19 23:36:32 -07:00
Luke Parker
bdc3bda04a Remove ethereum-serai/serai-processor-ethereum-contracts
contracts was smashed out of ethereum-serai. Both have now been smashed into
individual crates.

Creates a TODO directory with left-over test code yet to be moved.
2024-09-19 23:36:32 -07:00
Luke Parker
433beac93a Ethereum SignableTransaction, Eventuality 2024-09-19 23:36:32 -07:00
Luke Parker
8f2a9301cf Don't have the router drop transactions which may have top-level transfers
The router will now match the top-level transfer so it isn't used as the
justification for the InInstruction it's handling. This allows the theoretical
case where a top-level transfer occurs (to any entity) and an internal call
performs a transfer to Serai.

Also uses a JoinSet for fetching transactions' top-level transfers in the ERC20
crate. This does add a dependency on tokio yet improves performance, and it's
scoped under serai-processor (which is always presumed to be tokio-based).
While we could instead import futures for join_all,
https://github.com/smol-rs/futures-lite/issues/6 summarizes why that wouldn't
be a good idea. While we could prefer async-executor over tokio's JoinSet,
JoinSet doesn't share the same issues as FuturesUnordered. That means our
question is solely if we want the async-executor executor or the tokio
executor, when we've already established the Serai processor is always presumed
to be tokio-based.
2024-09-19 23:36:32 -07:00
Luke Parker
d21034c349 Add calls to get the messages to sign for the router 2024-09-19 23:36:32 -07:00
Luke Parker
381495618c Trim dead code 2024-09-19 23:36:32 -07:00
Luke Parker
ee0efe7cde Don't have the Deployer store the deployment block
Also updates how re-entrancy is handled to a more efficient and portable
mechanism.
2024-09-19 23:36:32 -07:00
Luke Parker
7feb7aed22 Hash the message before the challenge function in the Schnorr contract
Slightly more efficient.
2024-09-19 23:36:32 -07:00
Luke Parker
cc75a92641 Smash out the router library 2024-09-19 23:36:32 -07:00
Luke Parker
a7d5640642 Smash ERC20 into its own library 2024-09-19 23:36:32 -07:00
Luke Parker
ae61f3d359 forge fmt 2024-09-19 23:36:32 -07:00
Luke Parker
4bcea31c2a Break Ethereum Deployer into crate 2024-09-19 23:36:32 -07:00
Luke Parker
eb9bce6862 Remove OutInstruction's data field
It makes sense for networks which support arbitrary data to do so as part of their
address. This reduces the ability to perform DoSs, achieves better performance,
and better uses the type system (as now networks we don't support data on don't
have a data field).

Updates the Ethereum address definition in serai-client accordingly
2024-09-19 23:36:32 -07:00
Luke Parker
39be23d807 Remove artifacts for serai-processor-ethereum-contracts 2024-09-19 23:36:32 -07:00
Luke Parker
3f0f4d520d Remove the Sandbox contract
If instead of intaking calls, we intake code, we can deploy a fresh contract
which makes arbitrary calls *without* attempting to build our abstraction
layer over the concept.

This should have the same gas costs, as we still have one contract deployment.
The new contract only has a constructor, so it should have no actual code and
beat the Sandbox in that regard? We do have to call into ourselves to meter the
gas, yet we already had to call into the deployed Sandbox to achieve that.

Also re-defines the OutInstruction to include tokens, implements
OutInstruction-specified gas amounts, bumps the Solidity version, and other
such misc changes.
2024-09-19 23:36:32 -07:00
Luke Parker
80ca2b780a Add tests for the premise of the Schnorr contract to the Schnorr crate 2024-09-19 23:36:32 -07:00
Luke Parker
0813351f1f OUT_DIR > artifacts 2024-09-19 23:36:32 -07:00
Luke Parker
a38d135059 rust-toolchain 1.81 2024-09-19 23:36:32 -07:00
Luke Parker
67f9f76fdf Remove publish = false 2024-09-19 23:36:32 -07:00
Luke Parker
1c5bc2259e Dedicated crate for the Schnorr contract 2024-09-19 23:36:32 -07:00
Luke Parker
bdf89f5350 Add dedicated crate for building Solidity contracts 2024-09-19 23:36:32 -07:00
Luke Parker
239127aae5 Add crate for the Ethereum contracts 2024-09-19 23:36:32 -07:00
Luke Parker
d9543bee40 Move ethereum-serai under the processor
It isn't generally usable and should be directly integrated at this point.
2024-09-19 23:36:32 -07:00
Luke Parker
8746b54a43 Don't use a different address for DAI in test
anvil will let us deploy to the existing address.
2024-09-19 23:36:32 -07:00
Luke Parker
7761798a78 Outline the Ethereum processor
This was only half-finished to begin with, unfortunately...
2024-09-19 23:36:32 -07:00
Luke Parker
72a18bf8bb Smart Contract Scheduler 2024-09-19 23:36:32 -07:00
Luke Parker
0616085109 Monero Planner
Finishes the Monero processor.
2024-09-19 23:36:32 -07:00
Luke Parker
e23176deeb Change dummy payment ID behavior on 2-output, no change
This reduces the ability to fingerprint from any observer of the blockchain to
just one of the two recipients.
2024-09-19 23:36:32 -07:00
Luke Parker
5551521e58 Tighten documentation on Block::number 2024-09-19 23:36:32 -07:00
Luke Parker
a2d9aeaed7 Stub out Scheduler in the Monero processor 2024-09-19 23:36:32 -07:00
Luke Parker
e1ad897f7e Allow scheduler's creation of transactions to be async and error
I don't love this, but it's the only way to select decoys without using a local
database. While the prior commit added such a database, the performance of it
presumably wasn't viable, and while TODOs marked the needed improvements, it
was still messy with an immense scope re: any auditing.

The relevant scheduler functions now take `&self` (intentional, as all
mutations should be via the `&mut impl DbTxn` passed). The calls to `&self` are
expected to be completely deterministic (as usual).
2024-09-19 23:36:32 -07:00
Luke Parker
2edc2f3612 Add a database of all Monero outs into the processor
Enables synchronous transaction creation (which requires synchronous decoy
selection).
2024-09-19 23:36:32 -07:00
Luke Parker
e56af7fc51 Monero time_for_block, dust 2024-09-19 23:36:32 -07:00
Luke Parker
947e1067d9 Monero Processor scan, check_for_eventuality_resolutions 2024-09-19 23:36:32 -07:00
Luke Parker
b4e94f3d51 cargo fmt signers/scanner 2024-09-19 23:36:32 -07:00
Luke Parker
1b39138472 Define subaddress indexes to use
(1, 0) is the external address. (2, *) are the internal addresses.
2024-09-19 23:36:32 -07:00
Luke Parker
e78236276a Remove async-trait from processor/
Part of https://github.com/serai-dex/serai/issues/607.
2024-09-19 23:36:32 -07:00
Luke Parker
2c4c33e632 Misc continuances on the Monero processor 2024-09-19 23:36:32 -07:00
Luke Parker
02409c5735 Correct Multisig Rotation to use WINDOW_LENGTH where proper 2024-09-19 23:36:32 -07:00
Luke Parker
f2cf03cedf Monero processor primitives 2024-09-19 23:36:32 -07:00
Luke Parker
0d4c8cf032 Use a local DB channel for sending to the message-queue
The provided message-queue queue functions run until they succeed. This means
sending to the message-queue will no longer potentially block for an arbitrary
amount of time, as sending messages is just writing them to a DB.
2024-09-19 23:36:32 -07:00
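A hedged sketch of the shape of this (DbTxn and the key encoding are illustrative):

trait DbTxn {
  fn put(&mut self, key: Vec<u8>, value: Vec<u8>);
}

// Sending is a DB write under the caller's transaction: it can't block, and it
// commits atomically with the rest of the caller's state changes. A dedicated
// task later reads these entries in order, sends each to the message-queue
// (retrying until acknowledged), and deletes them.
fn queue_message(txn: &mut impl DbTxn, next_index: &mut u64, msg: &[u8]) {
  txn.put(format!("message-queue/{next_index}").into_bytes(), msg.to_vec());
  *next_index += 1;
}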
Luke Parker
b6811f9015 serai-processor-bin
Moves the coordinator loop out of serai-bitcoin-processor, completing it.

Fixes a potential race condition in the message-queue regarding multiple
sockets sending messages at once.
2024-09-19 23:36:32 -07:00
Luke Parker
fcd5fb85df Add binary search to find the block to start scanning from 2024-09-19 23:36:32 -07:00
Luke Parker
3ac0265f07 Add section documenting the safety of txindex upon reorganizations 2024-09-19 23:36:32 -07:00
Luke Parker
9b8c8f8231 Misc tidying of serai-db calls 2024-09-19 23:36:32 -07:00
Luke Parker
59fa49f750 Continue filling out main loop
Adds generics to the db_channel macro, fixes the bug where it needed at least
one key.
2024-09-19 23:36:32 -07:00
Luke Parker
723f529659 Note better message structure in messages 2024-09-19 23:36:32 -07:00
Luke Parker
73af09effb Add note to signers on reducing disk IO 2024-09-19 23:36:32 -07:00
Luke Parker
4054e44471 Start on the new processor main loop 2024-09-19 23:36:32 -07:00
Luke Parker
a8159e9070 Bitcoin Key Gen 2024-09-19 23:36:32 -07:00
Luke Parker
b61ba9d1bb Adjust Bitcoin processor layout 2024-09-19 23:36:32 -07:00
Luke Parker
776cbbb9a4 Misc changes in response to prior two commits 2024-09-19 23:36:32 -07:00
Luke Parker
76a3f3ec4b Add an anyone-can-pay output to every Bitcoin transaction
Resolves #284.
2024-09-19 23:36:32 -07:00
Luke Parker
93c7d06684 Implement presumed_origin
Before we yield a block for scanning, we save all of the contained script
public keys. Then, when we want the address credited for creating an output,
we read the script public key of the spent output from the database.

Fixes #559.
2024-09-19 23:36:32 -07:00
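A sketch of the two halves (key/value encodings illustrative):

use std::collections::HashMap;

// On scan: persist every script public key in the block, keyed by the output
// it locks (txid, vout).
fn index_block(db: &mut HashMap<(String, u32), Vec<u8>>, txs: &[(String, Vec<Vec<u8>>)]) {
  for (txid, outputs) in txs {
    for (vout, script_pubkey) in outputs.iter().enumerate() {
      db.insert((txid.clone(), vout as u32), script_pubkey.clone());
    }
  }
}

// On crediting an output: the presumed origin is the script public key of the
// spent output, read back from the database.
fn presumed_origin(db: &HashMap<(String, u32), Vec<u8>>, spent: &(String, u32)) -> Option<Vec<u8>> {
  db.get(spent).cloned()
}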
Luke Parker
4cb838e248 Bitcoin processor lib.rs -> main.rs 2024-09-19 23:36:32 -07:00
Luke Parker
c988b7cdb0 Bitcoin TransactionPublisher 2024-09-19 23:36:32 -07:00
Luke Parker
017aab2258 Satisfy Scheduler for Bitcoin 2024-09-19 23:36:32 -07:00
Luke Parker
ba3a6f9e91 Bitcoin ScannerFeed 2024-09-19 23:36:32 -07:00
Luke Parker
e36b671f37 Remove bound that WINDOW_LENGTH < CONFIRMATIONS
It's unnecessary and not valuable.
2024-09-19 23:36:32 -07:00
Luke Parker
2d4b775b6e Add bitcoin Block trait impl 2024-09-19 23:36:32 -07:00
Luke Parker
247cc8f0cc Bitcoin Output/Transaction definitions 2024-09-19 23:36:32 -07:00
Luke Parker
0ccf71df1e Remove old signer impls 2024-09-19 23:36:32 -07:00
Luke Parker
8aba71b9c4 Add CosignerTask to signers, completing it 2024-09-19 23:36:32 -07:00
Luke Parker
46c12c0e66 SlashReport signing and signature publication 2024-09-19 23:36:32 -07:00
Luke Parker
3cc7b49492 Strongly type SlashReport, populate cosign/slash report tasks with work 2024-09-19 23:36:32 -07:00
Luke Parker
0078858c1c Tidy messages, publish all Batches to the coordinator
Prior, we published SignedBatches, yet Batches are necessary for auditing
purposes.
2024-09-19 23:36:32 -07:00
Luke Parker
a3cb514400 Have the coordinator task publish Batches 2024-09-19 23:36:32 -07:00
Luke Parker
ed0221d804 Add BatchSignerTask
Uses a wrapper around AlgorithmMachine Schnorrkel to let the message be &[].
2024-09-19 23:36:32 -07:00
Luke Parker
4152bcacb2 Replace scanner's BatchPublisher with a pair of DB channels 2024-09-19 23:36:32 -07:00
Luke Parker
f07ec7bee0 Route the coordinator, fix race conditions in the signers library 2024-09-19 23:36:32 -07:00
Luke Parker
7484eadbbb Expand task management
These extensions are necessary for the signers task management.
2024-09-19 23:36:32 -07:00
Luke Parker
59ff944152 Work on the higher-level signers API 2024-09-19 23:36:32 -07:00
Luke Parker
8f848b1abc Tidy transaction signing task 2024-09-19 23:36:32 -07:00
Luke Parker
100c80be9f Finish transaction signing task with TX rebroadcast code 2024-09-19 23:36:32 -07:00
Luke Parker
a353f9e2da Further work on transaction signing 2024-09-19 23:36:32 -07:00
Luke Parker
b62fc3a1fa Minor work on the transaction signing task 2024-09-19 23:36:32 -07:00
Luke Parker
8380653855 Add empty serai-processor-signers library
This will replace the signers still in the monolithic Processor binary.
2024-09-19 23:36:32 -07:00
Luke Parker
b50b889918 Split processor into bitcoin-processor, ethereum-processor, monero-processor 2024-09-19 23:36:32 -07:00
Luke Parker
d570c1d277 Move additional_key.rs to serai-processor-view-keys
I don't love this. I wanted to simply add this function to `processor/key-gen`,
but then anyone who wants a view key needs to pull in Bulletproofs which is a
mess of code. They'd also be subject to an AGPL licensed library.

This is so small it should be a primitive elsewhere, yet there is no primitives
library eligible. Maybe serai-client since that has the code to make
transactions to Serai (and will have this as a dependency)? Except then the
processor has to import serai-client when this rewrite removed it as a
dependency.
2024-09-19 23:36:32 -07:00
Luke Parker
2da24506a2 Remove vast swaths of legacy code in the processor 2024-09-19 23:36:32 -07:00
Luke Parker
6e9cb74022 Add non-transaction-chaining scheduler 2024-09-19 23:36:32 -07:00
Luke Parker
0c1aec29bb Finish routing output flushing
Completes the transaction-chaining scheduler.
2024-09-19 23:36:32 -07:00
Luke Parker
653ead1e8c Finish the tree logic in the transaction-chaining scheduler
Also completes the DB functions, makes Scheduler never instantiated, and
ensures tree roots have change outputs.
2024-09-19 23:36:32 -07:00
Luke Parker
8ff019265f Near-complete version of the tree algorithm in the transaction-chaining scheduler 2024-09-19 23:36:32 -07:00
Luke Parker
0601d47789 Work on the tree logic in the transaction-chaining scheduler 2024-09-19 23:36:32 -07:00
Luke Parker
ebef38d93b Ensure the transaction-chaining scheduler doesn't accumulate the same output multiple times 2024-09-19 23:36:32 -07:00
Luke Parker
75b4707002 Add input aggregation in the transaction-chaining scheduler
Also handles some other misc in it.
2024-09-19 23:36:32 -07:00
Luke Parker
3c787e005f Fix bug in the scanner regarding forwarded output amounts
We'd report the amount originally received, minus 2x the cost to aggregate,
regardless of the amount successfully forwarded. We should've reduced to the
amount successfully forwarded, if it was smaller, in case the cost to
forward exceeded the aggregation cost.
2024-09-19 23:36:32 -07:00
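The corrected accounting, as a minimal sketch (u64 atomic units assumed):

fn reported_forward_amount(received: u64, cost_to_aggregate: u64, forwarded: u64) -> u64 {
  // What we presume remains after paying the aggregation cost twice...
  let presumed = received.saturating_sub(2 * cost_to_aggregate);
  // ...capped by what was actually forwarded, in case the cost to forward
  // exceeded the aggregation cost.
  presumed.min(forwarded)
}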
Luke Parker
f11a6b4ff1 Better document the forwarded output flow 2024-09-19 23:36:32 -07:00
Luke Parker
fadc88d2ad Add scheduler-primitives
The main benefit is whatever scheduler is in use, we now have a single API to
receive TXs to sign (which is of value to the TX signer crate we'll inevitably
build).
2024-09-19 23:36:32 -07:00
Luke Parker
c88ebe985e Outline of the transaction-chaining scheduler 2024-09-19 23:36:32 -07:00
Luke Parker
6deb60513c Expand primitives/scanner with niceties needed for the scheduler 2024-09-19 23:36:32 -07:00
Luke Parker
bd277e7032 Add processor/scheduler/utxo/primitives
Includes the necessary signing functions and the fee amortization logic.

Moves transaction-chaining to utxo/transaction-chaining.
2024-09-19 23:36:32 -07:00
Luke Parker
fc765bb9e0 Add crate for the transaction-chaining Scheduler 2024-09-19 23:36:32 -07:00
Luke Parker
13b74195f7 Don't have acknowledge_batch immediately run
`acknowledge_batch` can only be run if we know what the Batch should be. If we
don't know what the Batch should be, we have to block until we do.
Specifically, we need the block number associated with the Batch.

Instead of blocking over the Scanner API, the Scanner API now solely queues
actions. A new task intakes those actions once we can. This ensures we can
intake the entire Substrate chain, even if our daemon for the external network
is stalled at its genesis block.

All of this for the block number alone seems ridiculous. To go from the block
hash in the Batch to the block number without this task, we'd at least need the
index task to be up to date (still requiring blocking or an API returning
ephemeral errors).
2024-09-19 23:36:32 -07:00
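Sketched (the real queue lives in the DB, not memory, and the types are illustrative):

use std::collections::VecDeque;

enum Action {
  AcknowledgeBatch { batch_id: u32, block_hash: [u8; 32] },
  QueueBurns(Vec<u8>),
}

// The Scanner API solely queues; callers never block on the external network.
fn acknowledge_batch(queue: &mut VecDeque<Action>, batch_id: u32, block_hash: [u8; 32]) {
  queue.push_back(Action::AcknowledgeBatch { batch_id, block_hash });
}

fn handle(_action: Action) { /* apply the action to the scanner's DB */ }

// A dedicated task intakes actions once they're actionable, i.e. once the
// index task knows the block number for the batch's block hash.
fn intake(queue: &mut VecDeque<Action>, block_number_of: impl Fn(&[u8; 32]) -> Option<u64>) {
  while let Some(action) = queue.front() {
    if let Action::AcknowledgeBatch { block_hash, .. } = action {
      if block_number_of(block_hash).is_none() {
        break;
      }
    }
    handle(queue.pop_front().unwrap());
  }
}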
Luke Parker
f21838e0d5 Replace acknowledge_block with acknowledge_batch 2024-09-19 23:36:32 -07:00
Luke Parker
76cbe6cf1e Have acknowledge_block take in the results of the InInstructions executed
If any failed, the scanner now creates a Burn for the return.
2024-09-19 23:36:32 -07:00
Luke Parker
5999f5d65a Route the DB w.r.t. forwarded outputs' information 2024-09-19 23:36:32 -07:00
Luke Parker
d429a0bae6 Remove unused ID -> number lookup 2024-09-19 23:36:32 -07:00
Luke Parker
775824f373 Impl ScanData serialization in the DB 2024-09-19 23:36:32 -07:00
Luke Parker
41a74cb513 Check a queued key has never been queued before
Re-queueing should only happen with a malicious supermajority and breaks
indexing by the key.
2024-09-19 23:36:32 -07:00
Luke Parker
e26da1ec34 Have the Eventuality task drop outputs which aren't ours and aren't worth it to aggregate
We could drop these entirely, yet there's some degree of utility to be able to
add coins to Serai in this manner.
2024-09-19 23:36:32 -07:00
Luke Parker
7266e7f7ea Add note on why LifetimeStage is monotonic 2024-09-19 23:36:32 -07:00
Luke Parker
a8b9b7bad3 Add sanity checks we haven't prior reported an InInstruction for/accumulated an output 2024-09-19 23:36:32 -07:00
Luke Parker
2ca7fccb08 Pass the lifetime information to the scheduler
Enables it to decide which keys to use for fulfillment/change.
2024-09-19 23:36:32 -07:00
Luke Parker
4f6d91037e Call flush_key 2024-09-19 23:36:32 -07:00
Luke Parker
8db76ed67c Add key management to the scheduler 2024-09-19 23:36:32 -07:00
Luke Parker
920303e1b4 Add helper to intake Eventualities 2024-09-19 23:36:32 -07:00
Luke Parker
9f4b28e5ae Clarify output-to-self to output-to-Serai
There's only the requirement it's to an active key which is being reported for.
2024-09-19 23:36:32 -07:00
Luke Parker
f9d02d43c2 Route burns through the scanner 2024-09-19 23:36:32 -07:00
Luke Parker
8ac501028d Add API to publish Batches with
This doesn't have to be abstract, we can generate the message and use the
message-queue API, yet this should help with testing.
2024-09-19 23:36:32 -07:00
Luke Parker
612c67c537 Cache the cost to aggregate 2024-09-19 23:36:32 -07:00
Luke Parker
04a971a024 Fill in various DB functions 2024-09-19 23:36:32 -07:00
Luke Parker
738636c238 Have Scanner::new spawn tasks 2024-09-19 23:36:32 -07:00
Luke Parker
65f3f48517 Add ReportDb 2024-09-19 23:36:32 -07:00
Luke Parker
7cc07d64d1 Make report.rs a folder, not a file 2024-09-19 23:36:32 -07:00
Luke Parker
fdfe520f9d Add ScanDb 2024-09-19 23:36:32 -07:00
Luke Parker
77ef25416b Make scan.rs a folder, not a file 2024-09-19 23:36:32 -07:00
Luke Parker
7c1025dbcb Implement key retiry 2024-09-19 23:36:32 -07:00
Luke Parker
a771fbe1c6 Logs, documentation, misc 2024-09-19 23:36:32 -07:00
Luke Parker
9cebdf7c68 Add sorts for safety even upon non-determinism 2024-09-19 23:36:32 -07:00
Luke Parker
75251f04b4 Use a channel for the InInstructions
It's still unclear how we'll handle refunding failed InInstructions at this
time. Presumably, extending the InInstruction channel with the associated
output ID?
2024-09-19 23:36:32 -07:00
Luke Parker
6196642beb Add a DbChannel between scan and eventuality task 2024-09-19 23:36:32 -07:00
Luke Parker
2bddf00222 Don't expose IndexDb throughout the crate 2024-09-19 23:36:32 -07:00
Luke Parker
9ab8ba0215 Add dedicated Eventuality DB and stub missing fns 2024-09-19 23:36:32 -07:00
Luke Parker
33e0c85f34 Make Eventuality a folder, not a file 2024-09-19 23:36:32 -07:00
Luke Parker
1e8f4e6156 Make a dedicated IndexDb 2024-09-19 23:36:32 -07:00
Luke Parker
66f3428051 Make index a folder, not a file 2024-09-19 23:36:32 -07:00
Luke Parker
7e71840822 Add helper methods
Checks fetched blocks are the indexed blocks. Sorts scanned outputs so they
aren't subject to an implicit, potentially non-deterministic order (such as if
handled by a threadpool).
2024-09-19 23:36:32 -07:00
Luke Parker
b65dbacd6a Move ContinuallyRan into primitives
I'm unsure where else it'll be used within the processor, yet it's generally
useful and I don't want to make a dedicated crate yet.
2024-09-19 23:36:32 -07:00
Luke Parker
2fcd9530dd Add a callback to accumulate outputs and return the new Eventualities 2024-09-19 23:36:32 -07:00
Luke Parker
379780a3c9 Flesh out eventuality task 2024-09-19 23:36:32 -07:00
Luke Parker
945f31dfc7 Have the scan flag blocks with change/branch/forwarded as notable 2024-09-19 23:36:32 -07:00
Luke Parker
d5d1fc3eea Flesh out report task 2024-09-19 23:36:32 -07:00
Luke Parker
fd12cc0213 Finish scan task 2024-09-19 23:36:32 -07:00
Luke Parker
ce805c8cc8 Correct compilation errors 2024-09-19 23:36:32 -07:00
Luke Parker
bc0cc5a754 Decide flow between scan/eventuality/report
Scan now only handles External outputs, with an associated essay going over
why. Scan directly creates the InInstruction (prior planned to be done in
Report), and Eventuality is declared to end up yielding the outputs.

That will require making the Eventuality flow two-stage. One stage to evaluate
existing Eventualities and yield outputs, and one stage to incorporate new
Eventualities before advancing the scan window.
2024-09-19 23:36:32 -07:00
Luke Parker
f2ee4daf43 Add Eventuality back to processor primitives
Also splits crate into modules.
2024-09-19 23:36:32 -07:00
Luke Parker
4e29678799 Add bounds for the eventuality task 2024-09-19 23:36:32 -07:00
Luke Parker
74d3075dae Document expectations on Eventuality task and correct code determining the block safe to scan/report 2024-09-19 23:36:32 -07:00
Luke Parker
155ad48f4c Handle dust 2024-09-19 23:36:32 -07:00
Luke Parker
951872b026 Differentiate BlockHeader from Block 2024-09-19 23:36:32 -07:00
Luke Parker
2b47feafed Correct misc compilation errors 2024-09-19 23:36:32 -07:00
Luke Parker
a2717d73f0 Flesh out new scanner a bit more
Adds the task to mark blocks safe to scan, and outlines the task to report
blocks.
2024-09-19 23:36:32 -07:00
Luke Parker
8763ef23ed Definition and delineation of tasks within the scanner
Also defines primitives for the processor.
2024-09-19 23:36:32 -07:00
Luke Parker
57a0ba966b Extend serai-db with support for generic keys/values 2024-09-19 23:36:32 -07:00
Luke Parker
e843b4a2a0 Move scanner.rs to scanner/lib.rs 2024-09-19 23:36:32 -07:00
Luke Parker
2f3bd7a02a Cleanup DB handling a bit in key-gen/attempt-manager 2024-09-19 23:36:32 -07:00
Luke Parker
1e8a9ec5bd Smash out the signer
Abstract, to be done for the transactions, the batches, the cosigns, the slash
reports, everything. It has a minimal API itself, intending to be as clear as
possible.
2024-09-19 23:36:32 -07:00
Luke Parker
2f29c91d30 Smash key-gen out of processor
Resolves some bad assumptions made regarding keys being unique or not.
2024-09-19 23:36:32 -07:00
Luke Parker
f3b91bd44f Smash key-gen into independent crate 2024-09-19 23:36:32 -07:00
Luke Parker
e4e4245ee3 One Round DKG (#589)
* Upstream GBP, divisor, circuit abstraction, and EC gadgets from FCMP++

* Initial eVRF implementation

Not quite done yet. It needs to communicate the resulting points and proofs to
extract them from the Pedersen Commitments in order to return those, and then
be tested.

* Add the openings of the PCs to the eVRF as necessary

* Add implementation of secq256k1

* Make DKG Encryption a bit more flexible

No longer requires the use of an EncryptionKeyMessage, and allows pre-defined
keys for encryption.

* Make NUM_BITS an argument for the field macro

* Have the eVRF take a Zeroizing private key

* Initial eVRF-based DKG

* Add embedwards25519 curve

* Inline the eVRF into the DKG library

Due to how we're handling share encryption, we'd either need two circuits or to
dedicate this circuit to the DKG. The latter makes sense at this time.

* Add documentation to the eVRF-based DKG

* Add paragraph claiming robustness

* Update to the new eVRF proof

* Finish routing the eVRF functionality

Still needs errors and serialization, along with a few other TODOs.

* Add initial eVRF DKG test

* Improve eVRF DKG

Updates how we calculate verification shares, improves performance when
extracting multiple sets of keys, and adds more to the test for it.

* Start using a proper error for the eVRF DKG

* Resolve various TODOs

Supports recovering multiple key shares from the eVRF DKG.

Inlines two loops to save 2**16 iterations.

Adds support for creating a constant time representation of scalars < NUM_BITS.

* Ban zero ECDH keys, document non-zero requirements

* Implement eVRF traits, all the way up to the DKG, for secp256k1/ed25519

* Add Ristretto eVRF trait impls

* Support participating multiple times in the eVRF DKG

* Only participate once per key, not once per key share

* Rewrite processor key-gen around the eVRF DKG

Still a WIP.

* Finish routing the new key gen in the processor

Doesn't touch the tests, coordinator, nor Substrate yet.
`cargo +nightly fmt && cargo +nightly-2024-07-01 clippy --all-features -p serai-processor`
does pass.

* Deduplicate and better document in processor key_gen

* Update serai-processor tests to the new key gen

* Correct amount of yx coefficients, get processor key gen test to pass

* Add embedded elliptic curve keys to Substrate

* Update processor key gen tests to the eVRF DKG

* Have set_keys take signature_participants, not removed_participants

Now no one is removed from the DKG. Only `t` people publish the key however.

Uses a BitVec for an efficient encoding of the participants.

* Update the coordinator binary for the new DKG

This does not yet update any tests.

* Add sensible Debug to key_gen::[Processor, Coordinator]Message

* Have the DKG explicitly declare how to interpolate its shares

Removes the hack for MuSig where we multiply keys by the inverse of their
lagrange interpolation factor.

* Replace Interpolation::None with Interpolation::Constant

Allows the MuSig DKG to keep the secret share as the original private key,
enabling deriving FROST nonces consistently regardless of the MuSig context.

* Get coordinator tests to pass

* Update spec to the new DKG

* Get clippy to pass across the repo

* cargo machete

* Add an extra sleep to ensure expected ordering of `Participation`s

* Update orchestration

* Remove bad panic in coordinator

It expected ConfirmationShare to be n-of-n, not t-of-n.

* Improve documentation on  functions

* Update TX size limit

We now no longer have to support the ridiculous case of having 49 DKG
participations within a 101-of-150 DKG. It does remain quite high due to
needing to _sign_ so many times. It may be optimal for parties with multiple
key shares to independently send their preprocesses/shares (despite the
overhead that'll cause with signatures and the transaction structure).

* Correct error in the Processor spec document

* Update a few comments in the validator-sets pallet

* Send/Recv Participation one at a time

Sending all, then attempting to receive all in an expected order, wasn't working
even with notable delays between sending messages. This points to the mempool
not working as expected...

* Correct ThresholdKeys serialization in modular-frost test

* Updating existing TX size limit test for the new DKG parameters

* Increase time allowed for the DKG on the GH CI

* Correct construction of signature_participants in serai-client tests

Fault identified by akil.

* Further contextualize DkgConfirmer by ValidatorSet

Caught by a safety check we wouldn't reuse preprocesses across messages. That
raises the question of whether we were prior reusing preprocesses (reusing keys)?
Except that'd have caused a variety of signing failures (suggesting we had some
staggered timing avoiding it in practice but yes, this was possible in theory).

* Add necessary calls to set_embedded_elliptic_curve_key in coordinator set rotation tests

* Correct shimmed setting of a secq256k1 key

* cargo fmt

* Don't use `[0; 32]` for the embedded keys in the coordinator rotation test

The key_gen function expects the random values already decided.

* Big-endian secq256k1 scalars

Also restores the prior, safer, Encryption::register function.
2024-09-19 21:43:26 -04:00
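The signature_participants encoding can be sketched as one bit per validator (plain bytes standing in for the BitVec):

// Set bit i if validator i contributed to the signature.
fn encode_signature_participants(signed: &[bool]) -> Vec<u8> {
  let mut bits = vec![0u8; signed.len().div_ceil(8)];
  for (i, signed) in signed.iter().enumerate() {
    if *signed {
      bits[i / 8] |= 1 << (i % 8);
    }
  }
  bits
}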
845 changed files with 52241 additions and 47045 deletions

.github/LICENSE vendored
View File

@@ -1,6 +1,6 @@
 MIT License
-Copyright (c) 2022-2023 Luke Parker
+Copyright (c) 2022-2025 Luke Parker
 Permission is hereby granted, free of charge, to any person obtaining a copy
 of this software and associated documentation files (the "Software"), to deal

View File

@@ -5,14 +5,14 @@ inputs:
   version:
     description: "Version to download and run"
     required: false
-    default: "27.0"
+    default: "30.0"
 runs:
   using: "composite"
   steps:
     - name: Bitcoin Daemon Cache
       id: cache-bitcoind
-      uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809
+      uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # 4.2.4
       with:
         path: bitcoin.tar.gz
         key: bitcoind-${{ runner.os }}-${{ runner.arch }}-${{ inputs.version }}
@@ -37,4 +37,4 @@ runs:
     - name: Bitcoin Regtest Daemon
       shell: bash
-      run: PATH=$PATH:/usr/bin ./orchestration/dev/networks/bitcoin/run.sh -daemon
+      run: PATH=$PATH:/usr/bin ./orchestration/dev/networks/bitcoin/run.sh -txindex -daemon

View File

@@ -52,9 +52,9 @@ runs:
     - name: Install solc
       shell: bash
       run: |
-        cargo +1.89 install svm-rs --version =0.5.18
-        svm install 0.8.26
-        svm use 0.8.26
+        cargo +1.91.1 install svm-rs --version =0.5.21
+        svm install 0.8.29
+        svm use 0.8.29
     - name: Remove preinstalled Docker
       shell: bash
@@ -75,11 +75,8 @@ runs:
       if: runner.os == 'Linux'
     - name: Install rootless Docker
-      uses: docker/setup-docker-action@b60f85385d03ac8acfca6d9996982511d8620a19
+      uses: docker/setup-docker-action@e61617a16c407a86262fb923c35a616ddbe070b3 # 4.6.0
       with:
         rootless: true
         set-host: true
       if: runner.os == 'Linux'
-    # - name: Cache Rust
-    #   uses: Swatinem/rust-cache@a95ba195448af2da9b00fb742d14ffaaf3c21f43

View File

@@ -5,14 +5,14 @@ inputs:
   version:
     description: "Version to download and run"
     required: false
-    default: v0.18.3.4
+    default: v0.18.4.4
 runs:
   using: "composite"
   steps:
     - name: Monero Wallet RPC Cache
       id: cache-monero-wallet-rpc
-      uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809
+      uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # 4.2.4
       with:
         path: monero-wallet-rpc
         key: monero-wallet-rpc-${{ runner.os }}-${{ runner.arch }}-${{ inputs.version }}

View File

@@ -5,14 +5,14 @@ inputs:
   version:
     description: "Version to download and run"
     required: false
-    default: v0.18.3.4
+    default: v0.18.4.4
 runs:
   using: "composite"
   steps:
     - name: Monero Daemon Cache
       id: cache-monerod
-      uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809
+      uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # 4.2.4
       with:
         path: /usr/bin/monerod
         key: monerod-${{ runner.os }}-${{ runner.arch }}-${{ inputs.version }}

View File

@@ -5,12 +5,12 @@ inputs:
   monero-version:
     description: "Monero version to download and run as a regtest node"
     required: false
-    default: v0.18.3.4
+    default: v0.18.4.4
   bitcoin-version:
     description: "Bitcoin version to download and run as a regtest node"
     required: false
-    default: "27.1"
+    default: "30.0"
 runs:
   using: "composite"
@@ -19,9 +19,9 @@ runs:
       uses: ./.github/actions/build-dependencies
     - name: Install Foundry
-      uses: foundry-rs/foundry-toolchain@8f1998e9878d786675189ef566a2e4bf24869773
+      uses: foundry-rs/foundry-toolchain@50d5a8956f2e319df19e6b57539d7e2acb9f8c1e # 1.5.0
       with:
-        version: nightly-f625d0fa7c51e65b4bf1e8f7931cd1c6e2e285e9
+        version: v1.5.0
         cache: false
     - name: Run a Monero Regtest Node

View File

@@ -1 +1 @@
-nightly-2025-11-01
+nightly-2025-12-01

View File

@@ -17,7 +17,7 @@ jobs:
   test-common:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
+      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
       - name: Build Dependencies
         uses: ./.github/actions/build-dependencies
@@ -30,4 +30,5 @@ jobs:
           -p patchable-async-sleep \
           -p serai-db \
           -p serai-env \
+          -p serai-task \
           -p simple-request

View File

@@ -31,7 +31,7 @@ jobs:
   build:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
+      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
       - name: Install Build Dependencies
         uses: ./.github/actions/build-dependencies

View File

@@ -19,7 +19,7 @@ jobs:
   test-crypto:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
+      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
       - name: Build Dependencies
         uses: ./.github/actions/build-dependencies
@@ -35,12 +35,14 @@ jobs:
           -p ciphersuite-kp256 \
           -p multiexp \
           -p schnorr-signatures \
-          -p dleq \
+          -p prime-field \
+          -p short-weierstrass \
+          -p secq256k1 \
+          -p embedwards25519 \
           -p dkg \
           -p dkg-recovery \
           -p dkg-dealer \
+          -p dkg-promote \
           -p dkg-musig \
-          -p dkg-pedpop \
+          -p dkg-evrf \
           -p modular-frost \
           -p frost-schnorrkel

View File

@@ -9,16 +9,10 @@ jobs:
     name: Run cargo deny
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
+      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
-      - name: Advisory Cache
-        uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809
-        with:
-          path: ~/.cargo/advisory-db
-          key: rust-advisory-db
       - name: Install cargo deny
-        run: cargo +1.89 install cargo-deny --version =0.18.3
+        run: cargo +1.91.1 install cargo-deny --version =0.18.6
       - name: Run cargo deny
         run: cargo deny -L error --all-features check --hide-inclusion-graph

View File

@@ -13,7 +13,7 @@ jobs:
   build:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
+      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
       - name: Install Build Dependencies
         uses: ./.github/actions/build-dependencies

View File

@@ -15,7 +15,7 @@ jobs:
     runs-on: ${{ matrix.os }}
     steps:
-      - uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
+      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
       - name: Get nightly version to use
         id: nightly
@@ -26,7 +26,7 @@ jobs:
         uses: ./.github/actions/build-dependencies
       - name: Install nightly rust
-        run: rustup toolchain install ${{ steps.nightly.outputs.version }} --profile minimal -t wasm32v1-none -c rust-src -c clippy
+        run: rustup toolchain install ${{ steps.nightly.outputs.version }} --profile minimal -t wasm32v1-none -c clippy
       - name: Run Clippy
         run: cargo +${{ steps.nightly.outputs.version }} clippy --all-features --all-targets -- -D warnings -A clippy::items_after_test_module
@@ -43,16 +43,10 @@ jobs:
   deny:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
+      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
-      - name: Advisory Cache
-        uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809
-        with:
-          path: ~/.cargo/advisory-db
-          key: rust-advisory-db
       - name: Install cargo deny
-        run: cargo +1.89 install cargo-deny --version =0.18.4
+        run: cargo +1.91.1 install cargo-deny --version =0.18.6
       - name: Run cargo deny
         run: cargo deny -L error --all-features check --hide-inclusion-graph
@@ -60,7 +54,7 @@ jobs:
fmt: fmt:
runs-on: ubuntu-latest runs-on: ubuntu-latest
steps: steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
- name: Get nightly version to use - name: Get nightly version to use
id: nightly id: nightly
@@ -73,11 +67,137 @@ jobs:
- name: Run rustfmt - name: Run rustfmt
run: cargo +${{ steps.nightly.outputs.version }} fmt -- --check run: cargo +${{ steps.nightly.outputs.version }} fmt -- --check
- name: Install Foundry
uses: foundry-rs/foundry-toolchain@50d5a8956f2e319df19e6b57539d7e2acb9f8c1e # 1.5.0
with:
version: v1.5.0
cache: false
- name: Run forge fmt
run: FOUNDRY_FMT_SORT_INPUTS=false FOUNDRY_FMT_LINE_LENGTH=100 FOUNDRY_FMT_TAB_WIDTH=2 FOUNDRY_FMT_BRACKET_SPACING=true FOUNDRY_FMT_INT_TYPES=preserve forge fmt --check $(find . -iname "*.sol")
machete: machete:
runs-on: ubuntu-latest runs-on: ubuntu-latest
steps: steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
- name: Verify all dependencies are in use - name: Verify all dependencies are in use
run: | run: |
cargo +1.89 install cargo-machete --version =0.8.0 cargo +1.91.1 install cargo-machete --version =0.9.1
cargo +1.89 machete cargo +1.91.1 machete
msrv:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
- name: Verify claimed `rust-version`
shell: bash
run: |
cargo +1.91.1 install cargo-msrv --version =0.18.4
function check_msrv {
# We `cd` into the directory passed as the first argument, but will return to the
# directory this was called from.
return_to=$(pwd)
echo "Checking $1"
cd $1
# We then find the existing `rust-version` using `grep` (for the right line) and then a
# regex (to strip to just the major and minor version).
existing=$(cat ./Cargo.toml | grep "rust-version" | grep -Eo "[0-9]+\.[0-9]+")
# We then backup the `Cargo.toml`, allowing us to restore it after, saving time on future
# MSRV checks (as they'll benefit from immediately exiting if the queried version is less
# than the declared MSRV).
mv ./Cargo.toml ./Cargo.toml.bak
# We then use an inverted (`-v`) grep to remove the existing `rust-version` from the
# `Cargo.toml`, as required because else earlier versions of Rust won't even attempt to
# compile this crate.
cat ./Cargo.toml.bak | grep -v "rust-version" > Cargo.toml
# We then find the actual `rust-version` using `cargo-msrv` (again stripping to just the
# major and minor version).
actual=$(cargo msrv find --output-format minimal | grep -Eo "^[0-9]+\.[0-9]+")
# Finally, we compare the two.
echo "Declared rust-version: $existing"
echo "Actual rust-version: $actual"
[ $existing == $actual ]
result=$?
# Restore the original `Cargo.toml`.
rm Cargo.toml
mv ./Cargo.toml.bak ./Cargo.toml
# Return to the directory this was called from and return the result.
cd $return_to
return $result
}
# Check each member of the workspace
function check_workspace {
# Get the members array from the workspace's `Cargo.toml`
cargo_toml_lines=$(cat ./Cargo.toml | wc -l)
# Keep all lines after the start of the array, then keep all lines before the next "]"
members=$(cat Cargo.toml | grep "members\ \=\ \[" -m1 -A$cargo_toml_lines | grep "]" -m1 -B$cargo_toml_lines)
# Parse out any comments and whitespace, including comments postfixed on the same line as an entry
# We accomplish the latter by pruning all characters after the entry's ","
members=$(echo "$members" | grep -Ev "^[[:space:]]*(#|$)" | awk -F',' '{print $1","}')
# Replace the first line, which was "members = [" and is now "members = [,", with "["
members=$(echo "$members" | sed "1s/.*/\[/")
# Correct the last line, which was malleated to "],"
members=$(echo "$members" | sed "$(echo "$members" | wc -l)s/\]\,/\]/")
# Don't check the following
# Most of these are binaries, with the exception of the Substrate runtime which has a
# bespoke build pipeline
members=$(echo "$members" | grep -v "networks/ethereum/relayer\"")
members=$(echo "$members" | grep -v "message-queue\"")
members=$(echo "$members" | grep -v "processor/bin\"")
members=$(echo "$members" | grep -v "processor/bitcoin\"")
members=$(echo "$members" | grep -v "processor/ethereum\"")
members=$(echo "$members" | grep -v "processor/monero\"")
members=$(echo "$members" | grep -v "coordinator\"")
members=$(echo "$members" | grep -v "substrate/runtime\"")
members=$(echo "$members" | grep -v "substrate/node\"")
members=$(echo "$members" | grep -v "orchestration\"")
# Don't check the tests
members=$(echo "$members" | grep -v "mini\"")
members=$(echo "$members" | grep -v "tests/")
# Remove the trailing comma from the final entry (the line before the closing bracket)
members=$(echo "$members" | sed "$(($(echo "$members" | wc -l) - 1))s/\,//")
echo $members | jq -r ".[]" | while read -r member; do
check_msrv $member
correct=$?
if [ $correct -ne 0 ]; then
return $correct
fi
done
}
check_workspace
slither:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
- name: Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Slither
run: |
python3 -m pip install slither-analyzer==0.11.3
slither ./networks/ethereum/schnorr/contracts/Schnorr.sol
slither --include-paths ./networks/ethereum/schnorr/contracts ./networks/ethereum/schnorr/contracts/tests/Schnorr.sol
slither processor/ethereum/deployer/contracts/Deployer.sol
slither processor/ethereum/erc20/contracts/IERC20.sol
cp networks/ethereum/schnorr/contracts/Schnorr.sol processor/ethereum/router/contracts/
cp processor/ethereum/erc20/contracts/IERC20.sol processor/ethereum/router/contracts/
cd processor/ethereum/router/contracts
slither Router.sol


@@ -27,7 +27,7 @@ jobs:
build: build:
runs-on: ubuntu-latest runs-on: ubuntu-latest
steps: steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
- name: Install Build Dependencies - name: Install Build Dependencies
uses: ./.github/actions/build-dependencies uses: ./.github/actions/build-dependencies


@@ -17,7 +17,7 @@ jobs:
test-common: test-common:
runs-on: ubuntu-latest runs-on: ubuntu-latest
steps: steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
- name: Build Dependencies - name: Build Dependencies
uses: ./.github/actions/build-dependencies uses: ./.github/actions/build-dependencies


@@ -9,7 +9,7 @@ jobs:
name: Update nightly name: Update nightly
runs-on: ubuntu-latest runs-on: ubuntu-latest
steps: steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
with: with:
submodules: "recursive" submodules: "recursive"


@@ -21,7 +21,7 @@ jobs:
test-networks: test-networks:
runs-on: ubuntu-latest runs-on: ubuntu-latest
steps: steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
- name: Test Dependencies - name: Test Dependencies
uses: ./.github/actions/test-dependencies uses: ./.github/actions/test-dependencies
@@ -30,6 +30,7 @@ jobs:
run: | run: |
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features \ GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features \
-p bitcoin-serai \ -p bitcoin-serai \
-p build-solidity-contracts \
-p ethereum-schnorr-contract \
-p alloy-simple-request-transport \ -p alloy-simple-request-transport \
-p ethereum-serai \
-p serai-ethereum-relayer \ -p serai-ethereum-relayer \


@@ -23,13 +23,23 @@ jobs:
build: build:
runs-on: ubuntu-latest runs-on: ubuntu-latest
steps: steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
- name: Install Build Dependencies - name: Install Build Dependencies
uses: ./.github/actions/build-dependencies uses: ./.github/actions/build-dependencies
- name: Get nightly version to use
id: nightly
shell: bash
run: echo "version=$(cat .github/nightly-version)" >> $GITHUB_OUTPUT
- name: Install RISC-V Toolchain - name: Install RISC-V Toolchain
run: sudo apt update && sudo apt install -y gcc-riscv64-unknown-elf gcc-multilib && rustup target add riscv32imac-unknown-none-elf run: |
sudo apt update
sudo apt install -y gcc-riscv64-unknown-elf gcc-multilib
rustup toolchain install ${{ steps.nightly.outputs.version }} --profile minimal --component rust-src --target riscv32imac-unknown-none-elf
- name: Verify no-std builds - name: Verify no-std builds
run: CFLAGS=-I/usr/include cargo build --target riscv32imac-unknown-none-elf -p serai-no-std-tests run: |
CFLAGS=-I/usr/include cargo +${{ steps.nightly.outputs.version }} build --target riscv32imac-unknown-none-elf -Z build-std=core -p serai-no-std-tests
CFLAGS=-I/usr/include cargo +${{ steps.nightly.outputs.version }} build --target riscv32imac-unknown-none-elf -Z build-std=core,alloc -p serai-no-std-tests --features "alloc"


@@ -46,16 +46,16 @@ jobs:
runs-on: ubuntu-latest runs-on: ubuntu-latest
steps: steps:
- name: Checkout - name: Checkout
uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
- name: Setup Ruby - name: Setup Ruby
uses: ruby/setup-ruby@44511735964dcb71245e7e55f72539531f7bc0eb uses: ruby/setup-ruby@8aeb6ff8030dd539317f8e1769a044873b56ea71 # 1.268.0
with: with:
bundler-cache: true bundler-cache: true
cache-version: 0 cache-version: 0
working-directory: "${{ github.workspace }}/docs" working-directory: "${{ github.workspace }}/docs"
- name: Setup Pages - name: Setup Pages
id: pages id: pages
uses: actions/configure-pages@983d7736d9b0ae728b81ab479565c72886d7745b uses: actions/configure-pages@983d7736d9b0ae728b81ab479565c72886d7745b # 5.0.0
- name: Build with Jekyll - name: Build with Jekyll
run: cd ${{ github.workspace }}/docs && bundle exec jekyll build --baseurl "${{ steps.pages.outputs.base_path }}" run: cd ${{ github.workspace }}/docs && bundle exec jekyll build --baseurl "${{ steps.pages.outputs.base_path }}"
env: env:
@@ -74,7 +74,7 @@ jobs:
mv target/doc docs/_site/rust mv target/doc docs/_site/rust
- name: Upload artifact - name: Upload artifact
uses: actions/upload-pages-artifact@7b1f4a764d45c48632c6b24a0339c27f5614fb0b uses: actions/upload-pages-artifact@7b1f4a764d45c48632c6b24a0339c27f5614fb0b # 4.0.0
with: with:
path: "docs/_site/" path: "docs/_site/"
@@ -88,4 +88,4 @@ jobs:
steps: steps:
- name: Deploy to GitHub Pages - name: Deploy to GitHub Pages
id: deployment id: deployment
uses: actions/deploy-pages@d6db90164ac5ed86f2b6aed7e0febac5b3c0c03e uses: actions/deploy-pages@d6db90164ac5ed86f2b6aed7e0febac5b3c0c03e # 4.0.5


@@ -31,7 +31,7 @@ jobs:
build: build:
runs-on: ubuntu-latest runs-on: ubuntu-latest
steps: steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
- name: Install Build Dependencies - name: Install Build Dependencies
uses: ./.github/actions/build-dependencies uses: ./.github/actions/build-dependencies


@@ -27,7 +27,7 @@ jobs:
build: build:
runs-on: ubuntu-latest runs-on: ubuntu-latest
steps: steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
- name: Install Build Dependencies - name: Install Build Dependencies
uses: ./.github/actions/build-dependencies uses: ./.github/actions/build-dependencies


@@ -29,7 +29,7 @@ jobs:
test-infra: test-infra:
runs-on: ubuntu-latest runs-on: ubuntu-latest
steps: steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
- name: Build Dependencies - name: Build Dependencies
uses: ./.github/actions/build-dependencies uses: ./.github/actions/build-dependencies
@@ -39,9 +39,34 @@ jobs:
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features \ GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features \
-p serai-message-queue \ -p serai-message-queue \
-p serai-processor-messages \ -p serai-processor-messages \
-p serai-processor \ -p serai-processor-key-gen \
-p serai-processor-view-keys \
-p serai-processor-frost-attempt-manager \
-p serai-processor-primitives \
-p serai-processor-scanner \
-p serai-processor-scheduler-primitives \
-p serai-processor-utxo-scheduler-primitives \
-p serai-processor-utxo-scheduler \
-p serai-processor-transaction-chaining-scheduler \
-p serai-processor-smart-contract-scheduler \
-p serai-processor-signers \
-p serai-processor-bin \
-p serai-bitcoin-processor \
-p serai-processor-ethereum-primitives \
-p serai-processor-ethereum-test-primitives \
-p serai-processor-ethereum-deployer \
-p serai-processor-ethereum-router \
-p serai-processor-ethereum-erc20 \
-p serai-ethereum-processor \
-p serai-monero-processor \
-p tendermint-machine \ -p tendermint-machine \
-p tributary-chain \ -p tributary-sdk \
-p serai-cosign-types \
-p serai-cosign \
-p serai-coordinator-substrate \
-p serai-coordinator-tributary \
-p serai-coordinator-p2p \
-p serai-coordinator-libp2p-p2p \
-p serai-coordinator \ -p serai-coordinator \
-p serai-orchestrator \ -p serai-orchestrator \
-p serai-docker-tests -p serai-docker-tests
@@ -49,7 +74,7 @@ jobs:
test-substrate: test-substrate:
runs-on: ubuntu-latest runs-on: ubuntu-latest
steps: steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
- name: Build Dependencies - name: Build Dependencies
uses: ./.github/actions/build-dependencies uses: ./.github/actions/build-dependencies
@@ -58,31 +83,33 @@ jobs:
run: | run: |
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features \ GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features \
-p serai-primitives \ -p serai-primitives \
-p serai-coins-primitives \
-p serai-coins-pallet \
-p serai-dex-pallet \
-p serai-validator-sets-primitives \
-p serai-validator-sets-pallet \
-p serai-genesis-liquidity-primitives \
-p serai-genesis-liquidity-pallet \
-p serai-emissions-primitives \
-p serai-emissions-pallet \
-p serai-economic-security-pallet \
-p serai-in-instructions-primitives \
-p serai-in-instructions-pallet \
-p serai-signals-primitives \
-p serai-signals-pallet \
-p serai-abi \ -p serai-abi \
-p substrate-median \
-p serai-core-pallet \
-p serai-coins-pallet \
-p serai-validator-sets-pallet \
-p serai-signals-pallet \
-p serai-dex-pallet \
-p serai-genesis-liquidity-pallet \
-p serai-economic-security-pallet \
-p serai-emissions-pallet \
-p serai-in-instructions-pallet \
-p serai-runtime \ -p serai-runtime \
-p serai-node -p serai-node
-p serai-substrate-tests
test-serai-client: test-serai-client:
runs-on: ubuntu-latest runs-on: ubuntu-latest
steps: steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
- name: Build Dependencies - name: Build Dependencies
uses: ./.github/actions/build-dependencies uses: ./.github/actions/build-dependencies
- name: Run Tests - name: Run Tests
run: GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features -p serai-client run: |
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features -p serai-client-serai
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features -p serai-client-bitcoin
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features -p serai-client-ethereum
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features -p serai-client-monero
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features -p serai-client

.gitignore

@@ -2,11 +2,10 @@ target
# Don't commit any `Cargo.lock` which aren't the workspace's # Don't commit any `Cargo.lock` which aren't the workspace's
Cargo.lock Cargo.lock
!./Cargo.lock !/Cargo.lock
# Don't commit any `Dockerfile`, as they're auto-generated, except the only one which isn't # Don't commit any `Dockerfile`, as they're auto-generated, except the only one which isn't
Dockerfile Dockerfile
Dockerfile.fast-epoch
!orchestration/runtime/Dockerfile !orchestration/runtime/Dockerfile
.test-logs .test-logs

Cargo.lock (diff suppressed as too large)

@@ -1,18 +1,12 @@
[workspace] [workspace]
resolver = "2" resolver = "2"
members = [ members = [
# std patches
"patches/matches",
# Rewrites/redirects
"patches/option-ext",
"patches/directories-next",
"common/std-shims", "common/std-shims",
"common/zalloc", "common/zalloc",
"common/patchable-async-sleep", "common/patchable-async-sleep",
"common/db", "common/db",
"common/env", "common/env",
"common/task",
"common/request", "common/request",
"crypto/transcript", "crypto/transcript",
@@ -24,62 +18,87 @@ members = [
"crypto/ciphersuite/kp256", "crypto/ciphersuite/kp256",
"crypto/multiexp", "crypto/multiexp",
"crypto/schnorr", "crypto/schnorr",
"crypto/dleq",
"crypto/prime-field",
"crypto/short-weierstrass",
"crypto/secq256k1",
"crypto/embedwards25519",
"crypto/dkg", "crypto/dkg",
"crypto/dkg/recovery", "crypto/dkg/recovery",
"crypto/dkg/dealer", "crypto/dkg/dealer",
"crypto/dkg/promote",
"crypto/dkg/musig", "crypto/dkg/musig",
"crypto/dkg/pedpop", "crypto/dkg/evrf",
"crypto/frost", "crypto/frost",
"crypto/schnorrkel", "crypto/schnorrkel",
"networks/bitcoin", "networks/bitcoin",
"networks/ethereum/build-contracts",
"networks/ethereum/schnorr",
"networks/ethereum/alloy-simple-request-transport", "networks/ethereum/alloy-simple-request-transport",
"networks/ethereum",
"networks/ethereum/relayer", "networks/ethereum/relayer",
"message-queue", "message-queue",
"processor/messages", "processor/messages",
"processor",
"coordinator/tributary/tendermint", "processor/key-gen",
"processor/view-keys",
"processor/frost-attempt-manager",
"processor/primitives",
"processor/scanner",
"processor/scheduler/primitives",
"processor/scheduler/utxo/primitives",
"processor/scheduler/utxo/standard",
"processor/scheduler/utxo/transaction-chaining",
"processor/scheduler/smart-contract",
"processor/signers",
"processor/bin",
"processor/bitcoin",
"processor/ethereum/primitives",
"processor/ethereum/test-primitives",
"processor/ethereum/deployer",
"processor/ethereum/erc20",
"processor/ethereum/router",
"processor/ethereum",
"processor/monero",
"coordinator/tributary-sdk/tendermint",
"coordinator/tributary-sdk",
"coordinator/cosign/types",
"coordinator/cosign",
"coordinator/substrate",
"coordinator/tributary", "coordinator/tributary",
"coordinator/p2p",
"coordinator/p2p/libp2p",
"coordinator", "coordinator",
"substrate/primitives", "substrate/primitives",
"substrate/coins/primitives",
"substrate/coins/pallet",
"substrate/dex/pallet",
"substrate/validator-sets/primitives",
"substrate/validator-sets/pallet",
"substrate/genesis-liquidity/primitives",
"substrate/genesis-liquidity/pallet",
"substrate/emissions/primitives",
"substrate/emissions/pallet",
"substrate/economic-security/pallet",
"substrate/in-instructions/primitives",
"substrate/in-instructions/pallet",
"substrate/signals/primitives",
"substrate/signals/pallet",
"substrate/abi", "substrate/abi",
"substrate/median",
"substrate/core",
"substrate/coins",
"substrate/validator-sets",
"substrate/signals",
"substrate/dex",
"substrate/genesis-liquidity",
"substrate/economic-security",
"substrate/emissions",
"substrate/in-instructions",
"substrate/runtime", "substrate/runtime",
"substrate/node", "substrate/node",
"substrate/client/serai",
"substrate/client/bitcoin",
"substrate/client/ethereum",
"substrate/client/monero",
"substrate/client", "substrate/client",
"orchestration", "orchestration",
@@ -90,48 +109,96 @@ members = [
"tests/docker", "tests/docker",
"tests/message-queue", "tests/message-queue",
"tests/processor", # TODO "tests/processor",
"tests/coordinator", # TODO "tests/coordinator",
"tests/full-stack", "tests/substrate",
# TODO "tests/full-stack",
"tests/reproducible-runtime", "tests/reproducible-runtime",
] ]
[profile.dev.package]
# Always compile Monero (and a variety of dependencies) with optimizations due # Always compile Monero (and a variety of dependencies) with optimizations due
# to the extensive operations required for Bulletproofs # to the extensive operations required for Bulletproofs
[profile.dev.package]
subtle = { opt-level = 3 } subtle = { opt-level = 3 }
curve25519-dalek = { opt-level = 3 }
sha3 = { opt-level = 3 }
blake2 = { opt-level = 3 }
ff = { opt-level = 3 } ff = { opt-level = 3 }
group = { opt-level = 3 } group = { opt-level = 3 }
crypto-bigint = { opt-level = 3 } crypto-bigint = { opt-level = 3 }
curve25519-dalek = { opt-level = 3 }
dalek-ff-group = { opt-level = 3 } dalek-ff-group = { opt-level = 3 }
minimal-ed448 = { opt-level = 3 }
multiexp = { opt-level = 3 } multiexp = { opt-level = 3 }
monero-io = { opt-level = 3 }
monero-primitives = { opt-level = 3 }
monero-ed25519 = { opt-level = 3 }
monero-mlsag = { opt-level = 3 }
monero-clsag = { opt-level = 3 }
monero-borromean = { opt-level = 3 }
monero-bulletproofs-generators = { opt-level = 3 }
monero-bulletproofs = { opt-level = 3 }
monero-oxide = { opt-level = 3 } monero-oxide = { opt-level = 3 }
# Always compile the eVRF DKG tree with optimizations as well
secp256k1 = { opt-level = 3 }
secq256k1 = { opt-level = 3 }
embedwards25519 = { opt-level = 3 }
generalized-bulletproofs = { opt-level = 3 }
generalized-bulletproofs-circuit-abstraction = { opt-level = 3 }
generalized-bulletproofs-ec-gadgets = { opt-level = 3 }
# revm also effectively requires being built with optimizations
revm = { opt-level = 3 }
revm-bytecode = { opt-level = 3 }
revm-context = { opt-level = 3 }
revm-context-interface = { opt-level = 3 }
revm-database = { opt-level = 3 }
revm-database-interface = { opt-level = 3 }
revm-handler = { opt-level = 3 }
revm-inspector = { opt-level = 3 }
revm-interpreter = { opt-level = 3 }
revm-precompile = { opt-level = 3 }
revm-primitives = { opt-level = 3 }
revm-state = { opt-level = 3 }
[profile.release] [profile.release]
panic = "unwind" panic = "unwind"
overflow-checks = true overflow-checks = true
[patch.crates-io] [patch.crates-io]
# Dependencies from monero-oxide which originate from within our own tree # Point to empty crates for crates unused within our tree
std-shims = { path = "common/std-shims" } alloy-eip2124 = { path = "patches/ethereum/alloy-eip2124" }
simple-request = { path = "common/request" } ark-ff-3 = { package = "ark-ff", path = "patches/ethereum/ark-ff-0.3" }
dalek-ff-group = { path = "crypto/dalek-ff-group" } ark-ff-4 = { package = "ark-ff", path = "patches/ethereum/ark-ff-0.4" }
c-kzg = { path = "patches/ethereum/c-kzg" }
secp256k1-30 = { package = "secp256k1", path = "patches/ethereum/secp256k1-30" }
# Dependencies from monero-oxide which originate from within our own tree, potentially shimmed to account for deviations since publishing
std-shims = { path = "patches/std-shims" }
simple-request = { path = "patches/simple-request" }
multiexp = { path = "crypto/multiexp" }
flexible-transcript = { path = "crypto/transcript" } flexible-transcript = { path = "crypto/transcript" }
ciphersuite = { path = "patches/ciphersuite" }
dalek-ff-group = { path = "patches/dalek-ff-group" }
minimal-ed448 = { path = "crypto/ed448" }
modular-frost = { path = "crypto/frost" } modular-frost = { path = "crypto/frost" }
# Patch due to `std` now including the required functionality
is_terminal_polyfill = { path = "./patches/is_terminal_polyfill" }
# This has a non-deprecated `std` alternative since Rust's 2024 edition
home = { path = "patches/home" }
# Updates to the latest version
darling = { path = "patches/darling" }
thiserror = { path = "patches/thiserror" }
# https://github.com/rust-lang-nursery/lazy-static.rs/issues/201 # https://github.com/rust-lang-nursery/lazy-static.rs/issues/201
lazy_static = { git = "https://github.com/rust-lang-nursery/lazy-static.rs", rev = "5735630d46572f1e5377c8f2ba0f79d18f53b10c" } lazy_static = { git = "https://github.com/rust-lang-nursery/lazy-static.rs", rev = "5735630d46572f1e5377c8f2ba0f79d18f53b10c" }
# These have `std` alternatives
matches = { path = "patches/matches" }
home = { path = "patches/home" }
# directories-next was created because directories was unmaintained # directories-next was created because directories was unmaintained
# directories-next is now unmaintained while directories is maintained # directories-next is now unmaintained while directories is maintained
# The directories author pulls in ridiculously pointless crates and prefers # The directories author pulls in ridiculously pointless crates and prefers
@@ -140,11 +207,22 @@ home = { path = "patches/home" }
option-ext = { path = "patches/option-ext" } option-ext = { path = "patches/option-ext" }
directories-next = { path = "patches/directories-next" } directories-next = { path = "patches/directories-next" }
# Patch from a fork back to upstream
parity-bip39 = { path = "patches/parity-bip39" }
# Patch to include `FromUniformBytes<64>` over `Scalar`
k256 = { git = "https://github.com/kayabaNerve/elliptic-curves", rev = "4994c9ab163781a88cd4a49beae812a89a44e8c3" }
p256 = { git = "https://github.com/kayabaNerve/elliptic-curves", rev = "4994c9ab163781a88cd4a49beae812a89a44e8c3" }
# `jemalloc` conflicts with `mimalloc`, so patch to a `kvdb-rocksdb` which never exposes `jemalloc`
kvdb-rocksdb = { path = "patches/kvdb-rocksdb" }
[workspace.lints.clippy] [workspace.lints.clippy]
uninlined_format_args = "allow" # TODO
unwrap_or_default = "allow"
manual_is_multiple_of = "allow"
incompatible_msrv = "allow" # Manually verified with a GitHub workflow incompatible_msrv = "allow" # Manually verified with a GitHub workflow
manual_is_multiple_of = "allow"
unwrap_or_default = "allow"
map_unwrap_or = "allow"
needless_continue = "allow"
borrow_as_ptr = "deny" borrow_as_ptr = "deny"
cast_lossless = "deny" cast_lossless = "deny"
cast_possible_truncation = "deny" cast_possible_truncation = "deny"
@@ -169,14 +247,12 @@ large_stack_arrays = "deny"
linkedlist = "deny" linkedlist = "deny"
macro_use_imports = "deny" macro_use_imports = "deny"
manual_instant_elapsed = "deny" manual_instant_elapsed = "deny"
# TODO manual_let_else = "deny" manual_let_else = "deny"
manual_ok_or = "deny" manual_ok_or = "deny"
manual_string_new = "deny" manual_string_new = "deny"
map_unwrap_or = "deny"
match_bool = "deny" match_bool = "deny"
match_same_arms = "deny" match_same_arms = "deny"
missing_fields_in_debug = "deny" missing_fields_in_debug = "deny"
# TODO needless_continue = "deny"
needless_pass_by_value = "deny" needless_pass_by_value = "deny"
ptr_cast_constness = "deny" ptr_cast_constness = "deny"
range_minus_one = "deny" range_minus_one = "deny"
@@ -184,7 +260,9 @@ range_plus_one = "deny"
redundant_closure_for_method_calls = "deny" redundant_closure_for_method_calls = "deny"
redundant_else = "deny" redundant_else = "deny"
string_add_assign = "deny" string_add_assign = "deny"
string_slice = "deny"
unchecked_time_subtraction = "deny" unchecked_time_subtraction = "deny"
uninlined_format_args = "deny"
unnecessary_box_returns = "deny" unnecessary_box_returns = "deny"
unnecessary_join = "deny" unnecessary_join = "deny"
unnecessary_wraps = "deny" unnecessary_wraps = "deny"
@@ -193,20 +271,5 @@ unused_async = "deny"
unused_self = "deny" unused_self = "deny"
zero_sized_map_values = "deny" zero_sized_map_values = "deny"
# TODO: These were incurred when updating Rust as necessary for compilation, yet aren't being fixed
# at this time due to the impacts it'd have throughout the repository (when this isn't actively the
# primary branch, `next` is)
needless_continue = "allow"
needless_lifetimes = "allow"
useless_conversion = "allow"
empty_line_after_doc_comments = "allow"
manual_div_ceil = "allow"
manual_let_else = "allow"
unnecessary_map_or = "allow"
result_large_err = "allow"
unneeded_struct_pattern = "allow"
[workspace.lints.rust] [workspace.lints.rust]
unused = "allow" # TODO: https://github.com/rust-lang/rust/issues/147648 unused = "allow" # TODO: https://github.com/rust-lang/rust/issues/147648
mismatched_lifetime_syntaxes = "allow"
unused_attributes = "allow"
unused_parens = "allow"


@@ -0,0 +1,14 @@
# Trail of Bits Ethereum Contracts Audit, June 2025
This audit included:
- Our Schnorr contract and associated library (/networks/ethereum/schnorr)
- Our Ethereum primitives library (/processor/ethereum/primitives)
- Our Deployer contract and associated library (/processor/ethereum/deployer)
- Our ERC20 library (/processor/ethereum/erc20)
- Our Router contract and associated library (/processor/ethereum/router)
It encompasses the repository up to commit 4e0c58464fc4673623938335f06e2e9ea96ca8dd.
Please see
https://github.com/trailofbits/publications/blob/30c4fa3ebf39ff8e4d23ba9567344ec9691697b5/reviews/2025-04-serai-dex-security-review.pdf
for the actual report.


@@ -0,0 +1,50 @@
# eVRF DKG
In 2024, the [eVRF paper](https://eprint.iacr.org/2024/397) was published to
the IACR preprint server. Within it was a one-round unbiased DKG and a
one-round unbiased threshold DKG. Unfortunately, both simply describe
communication of the secret shares as 'Alice sends $s_b$ to Bob'. This causes,
in practice, the need for an additional round of communication to occur where
all participants confirm they received their secret shares.
Within Serai, it was posited to use the same premises as the DDH eVRF itself to
achieve a verifiable encryption scheme. This allows the secret shares to be
posted to any 'bulletin board' (such as a blockchain) and for all observers to
confirm:
- A participant participated
- The secret shares sent can be received by the intended recipient so long as
they can access the bulletin board
Additionally, Serai desired a robust scheme (albeit with a biased key as the
output, which is fine for our purposes). Accordingly, our implementation
instantiates the threshold eVRF DKG from the eVRF paper, using our own proposal
for verifiable encryption, and allows the caller to decide the set of
participants. They may:
- Select everyone, collapsing to the non-threshold unbiased DKG from the eVRF
paper
- Select a pre-determined set, collapsing to the threshold unbiased DKG from
the eVRF paper
- Select a post-determined set (with any solution for the Common Subset
  problem), achieving a robust threshold biased DKG
Note that the eVRF paper proposes using the eVRF to sample coefficients, yet
this is unnecessary when the resulting key will be biased. Any proof of
knowledge for the coefficients, as necessary for their extraction within the
security proofs, would be sufficient.
MAGIC Grants contracted HashCloak to formalize Serai's proposal for a DKG and
provide proofs for its security. This resulted in
[this paper](<./Security Proofs.pdf>).
Our implementation itself is then built on top of the audited
[`generalized-bulletproofs`](https://github.com/kayabaNerve/monero-oxide/tree/generalized-bulletproofs/audits/crypto/generalized-bulletproofs)
and
[`generalized-bulletproofs-ec-gadgets`](https://github.com/monero-oxide/monero-oxide/tree/fcmp%2B%2B/audits/fcmps).
Note we do not use the originally premised DDH eVRF, but rather the one
premised on elliptic curve divisors, the methodology of which is commented on
[here](https://github.com/monero-oxide/monero-oxide/tree/fcmp%2B%2B/audits/divisors).
Our implementation itself, however, is unaudited at this time.



@@ -1,13 +1,13 @@
[package] [package]
name = "serai-db" name = "serai-db"
version = "0.1.0" version = "0.1.1"
description = "A simple database trait and backends for it" description = "A simple database trait and backends for it"
license = "MIT" license = "MIT"
repository = "https://github.com/serai-dex/serai/tree/develop/common/db" repository = "https://github.com/serai-dex/serai/tree/develop/common/db"
authors = ["Luke Parker <lukeparker5132@gmail.com>"] authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = [] keywords = []
edition = "2021" edition = "2021"
rust-version = "1.65" rust-version = "1.77"
[package.metadata.docs.rs] [package.metadata.docs.rs]
all-features = true all-features = true
@@ -17,7 +17,7 @@ rustdoc-args = ["--cfg", "docsrs"]
workspace = true workspace = true
[dependencies] [dependencies]
parity-db = { version = "0.4", default-features = false, optional = true } parity-db = { version = "0.5", default-features = false, features = ["arc"], optional = true }
rocksdb = { version = "0.24", default-features = false, features = ["zstd"], optional = true } rocksdb = { version = "0.24", default-features = false, features = ["zstd"], optional = true }
[features] [features]


@@ -1,6 +1,6 @@
MIT License MIT License
Copyright (c) 2022-2023 Luke Parker Copyright (c) 2022-2025 Luke Parker
Permission is hereby granted, free of charge, to any person obtaining a copy Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal of this software and associated documentation files (the "Software"), to deal

common/db/README.md (new file)

@@ -0,0 +1,8 @@
# Serai DB
An inefficient, minimal abstraction around databases.
The abstraction offers `get`, `put`, and `del` with helper functions and macros
built on top. Database iteration is not offered, forcing the caller to manually
implement indexing schemes. This ensures wide compatibility across abstracted
databases.
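
To illustrate, a minimal sketch of the trait surface in use; this assumes the
in-memory backend exposes a `MemDb::new()` constructor:

use serai_db::{Get, DbTxn, Db, MemDb};

fn demo() {
  let mut db = MemDb::new();
  // Writes only occur within a transaction
  let mut txn = db.txn();
  txn.put(b"key", b"value");
  assert_eq!(txn.get(b"key"), Some(b"value".to_vec()));
  txn.del(b"key");
  // Writes are only visible to other readers once committed
  txn.commit();
}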


@@ -15,7 +15,7 @@ pub fn serai_db_key(
/// ///
/// Creates a unit struct and a default implementation for the `key`, `get`, and `set`. The macro /// Creates a unit struct and a default implementation for the `key`, `get`, and `set`. The macro
/// uses a syntax similar to defining a function. Parameters are concatenated to produce a key, /// uses a syntax similar to defining a function. Parameters are concatenated to produce a key,
/// they must be `scale` encodable. The return type is used to auto encode and decode the database /// they must be `borsh` serializable. The return type is used to auto (de)serialize the database
/// value bytes using `borsh`. /// value bytes using `borsh`.
/// ///
/// # Arguments /// # Arguments
@@ -38,32 +38,65 @@ pub fn serai_db_key(
#[macro_export] #[macro_export]
macro_rules! create_db { macro_rules! create_db {
($db_name: ident { ($db_name: ident {
$($field_name: ident: ($($arg: ident: $arg_type: ty),*) -> $field_type: ty$(,)?)* $(
$field_name: ident:
$(<$($generic_name: tt: $generic_type: tt),+>)?(
$($arg: ident: $arg_type: ty),*
) -> $field_type: ty$(,)?
)*
}) => { }) => {
$( $(
#[derive(Clone, Debug)] #[derive(Clone, Debug)]
pub(crate) struct $field_name; pub(crate) struct $field_name$(
impl $field_name { <$($generic_name: $generic_type),+>
)?$(
(core::marker::PhantomData<($($generic_name),+)>)
)?;
impl$(<$($generic_name: $generic_type),+>)? $field_name$(<$($generic_name),+>)? {
pub(crate) fn key($($arg: $arg_type),*) -> Vec<u8> { pub(crate) fn key($($arg: $arg_type),*) -> Vec<u8> {
use scale::Encode;
$crate::serai_db_key( $crate::serai_db_key(
stringify!($db_name).as_bytes(), stringify!($db_name).as_bytes(),
stringify!($field_name).as_bytes(), stringify!($field_name).as_bytes(),
($($arg),*).encode() &borsh::to_vec(&($($arg),*)).unwrap(),
) )
} }
pub(crate) fn set(txn: &mut impl DbTxn $(, $arg: $arg_type)*, data: &$field_type) { pub(crate) fn set(
let key = $field_name::key($($arg),*); txn: &mut impl DbTxn
$(, $arg: $arg_type)*,
data: &$field_type
) {
let key = Self::key($($arg),*);
txn.put(&key, borsh::to_vec(data).unwrap()); txn.put(&key, borsh::to_vec(data).unwrap());
} }
pub(crate) fn get(getter: &impl Get, $($arg: $arg_type),*) -> Option<$field_type> { pub(crate) fn get(
getter.get($field_name::key($($arg),*)).map(|data| { getter: &impl Get,
$($arg: $arg_type),*
) -> Option<$field_type> {
getter.get(Self::key($($arg),*)).map(|data| {
borsh::from_slice(data.as_ref()).unwrap() borsh::from_slice(data.as_ref()).unwrap()
}) })
} }
// Returns a PhantomData of all generic types so if the generic was only used in the value,
// not the keys, this doesn't have unused generic types
#[allow(dead_code)] #[allow(dead_code)]
pub(crate) fn del(txn: &mut impl DbTxn $(, $arg: $arg_type)*) { pub(crate) fn del(
txn.del(&$field_name::key($($arg),*)) txn: &mut impl DbTxn
$(, $arg: $arg_type)*
) -> core::marker::PhantomData<($($($generic_name),+)?)> {
txn.del(&Self::key($($arg),*));
core::marker::PhantomData
}
pub(crate) fn take(
txn: &mut impl DbTxn
$(, $arg: $arg_type)*
) -> Option<$field_type> {
let key = Self::key($($arg),*);
let res = txn.get(&key).map(|data| borsh::from_slice(data.as_ref()).unwrap());
if res.is_some() {
txn.del(key);
}
res
} }
} }
)* )*
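
For context, a sketch of how the updated macro is invoked; the database and
field names here are illustrative, not from the tree, and a `borsh` dependency
is assumed to be in scope:

use serai_db::{Get, DbTxn, create_db};

create_db!(
  ExampleDb {
    // The arguments are `borsh`-serialized to form the key; the return type is the value
    LastBlock: (network: u8) -> u64,
  }
);

fn demo(txn: &mut impl DbTxn) {
  LastBlock::set(txn, 0, &100);
  assert_eq!(LastBlock::get(txn, 0), Some(100));
  // The new `take` reads and deletes the value in a single call
  assert_eq!(LastBlock::take(txn, 0), Some(100));
  assert_eq!(LastBlock::get(txn, 0), None);
}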
@@ -73,19 +106,30 @@ macro_rules! create_db {
#[macro_export] #[macro_export]
macro_rules! db_channel { macro_rules! db_channel {
($db_name: ident { ($db_name: ident {
$($field_name: ident: ($($arg: ident: $arg_type: ty),*) -> $field_type: ty$(,)?)* $($field_name: ident:
$(<$($generic_name: tt: $generic_type: tt),+>)?(
$($arg: ident: $arg_type: ty),*
) -> $field_type: ty$(,)?
)*
}) => { }) => {
$( $(
create_db! { create_db! {
$db_name { $db_name {
$field_name: ($($arg: $arg_type,)* index: u32) -> $field_type, $field_name: $(<$($generic_name: $generic_type),+>)?(
$($arg: $arg_type,)*
index: u32
) -> $field_type
} }
} }
impl $field_name { impl$(<$($generic_name: $generic_type),+>)? $field_name$(<$($generic_name),+>)? {
pub(crate) fn send(txn: &mut impl DbTxn $(, $arg: $arg_type)*, value: &$field_type) { pub(crate) fn send(
txn: &mut impl DbTxn
$(, $arg: $arg_type)*
, value: &$field_type
) {
// Use index 0 to store the amount of messages // Use index 0 to store the amount of messages
let messages_sent_key = $field_name::key($($arg),*, 0); let messages_sent_key = Self::key($($arg,)* 0);
let messages_sent = txn.get(&messages_sent_key).map(|counter| { let messages_sent = txn.get(&messages_sent_key).map(|counter| {
u32::from_le_bytes(counter.try_into().unwrap()) u32::from_le_bytes(counter.try_into().unwrap())
}).unwrap_or(0); }).unwrap_or(0);
@@ -96,19 +140,35 @@ macro_rules! db_channel {
// at the same time // at the same time
let index_to_use = messages_sent + 2; let index_to_use = messages_sent + 2;
$field_name::set(txn, $($arg),*, index_to_use, value); Self::set(txn, $($arg,)* index_to_use, value);
} }
pub(crate) fn try_recv(txn: &mut impl DbTxn $(, $arg: $arg_type)*) -> Option<$field_type> { pub(crate) fn peek(
let messages_recvd_key = $field_name::key($($arg),*, 1); getter: &impl Get
$(, $arg: $arg_type)*
) -> Option<$field_type> {
let messages_recvd_key = Self::key($($arg,)* 1);
let messages_recvd = getter.get(&messages_recvd_key).map(|counter| {
u32::from_le_bytes(counter.try_into().unwrap())
}).unwrap_or(0);
let index_to_read = messages_recvd + 2;
Self::get(getter, $($arg,)* index_to_read)
}
pub(crate) fn try_recv(
txn: &mut impl DbTxn
$(, $arg: $arg_type)*
) -> Option<$field_type> {
let messages_recvd_key = Self::key($($arg,)* 1);
let messages_recvd = txn.get(&messages_recvd_key).map(|counter| { let messages_recvd = txn.get(&messages_recvd_key).map(|counter| {
u32::from_le_bytes(counter.try_into().unwrap()) u32::from_le_bytes(counter.try_into().unwrap())
}).unwrap_or(0); }).unwrap_or(0);
let index_to_read = messages_recvd + 2; let index_to_read = messages_recvd + 2;
let res = $field_name::get(txn, $($arg),*, index_to_read); let res = Self::get(txn, $($arg,)* index_to_read);
if res.is_some() { if res.is_some() {
$field_name::del(txn, $($arg),*, index_to_read); Self::del(txn, $($arg,)* index_to_read);
txn.put(&messages_recvd_key, (messages_recvd + 1).to_le_bytes()); txn.put(&messages_recvd_key, (messages_recvd + 1).to_le_bytes());
} }
res res
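
Similarly, a sketch of the channel macro's semantics (names again illustrative).
Indexes 0 and 1 internally store the sent and received counters, so the
messages themselves start at index 2:

use serai_db::{Get, DbTxn, create_db, db_channel};

db_channel!(
  ExampleChannels {
    Messages: (network: u8) -> Vec<u8>,
  }
);

fn demo(txn: &mut impl DbTxn) {
  Messages::send(txn, 0, &vec![1, 2, 3]);
  // The new `peek` reads the next message without consuming it
  assert_eq!(Messages::peek(txn, 0), Some(vec![1, 2, 3]));
  // `try_recv` consumes it
  assert_eq!(Messages::try_recv(txn, 0), Some(vec![1, 2, 3]));
  assert_eq!(Messages::try_recv(txn, 0), None);
}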


@@ -14,26 +14,43 @@ mod parity_db;
#[cfg(feature = "parity-db")] #[cfg(feature = "parity-db")]
pub use parity_db::{ParityDb, new_parity_db}; pub use parity_db::{ParityDb, new_parity_db};
/// An object implementing get. /// An object implementing `get`.
pub trait Get { pub trait Get {
/// Get a value from the database.
fn get(&self, key: impl AsRef<[u8]>) -> Option<Vec<u8>>; fn get(&self, key: impl AsRef<[u8]>) -> Option<Vec<u8>>;
} }
/// An atomic database operation. /// An atomic database transaction.
///
/// A transaction is only required to atomically commit. It is not required that two `Get` calls
/// made with the same transaction return the same result, if another transaction wrote to that
/// key.
///
/// If two transactions are created, and both write (including deletions) to the same key, behavior
/// is undefined. The transaction may block, deadlock, panic, overwrite one of the two values
/// randomly, or take any other action, at time of write or at time of commit.
#[must_use] #[must_use]
pub trait DbTxn: Send + Get { pub trait DbTxn: Send + Get {
/// Write a value to this key.
fn put(&mut self, key: impl AsRef<[u8]>, value: impl AsRef<[u8]>); fn put(&mut self, key: impl AsRef<[u8]>, value: impl AsRef<[u8]>);
/// Delete the value from this key.
fn del(&mut self, key: impl AsRef<[u8]>); fn del(&mut self, key: impl AsRef<[u8]>);
/// Commit this transaction.
fn commit(self); fn commit(self);
} }
/// A database supporting atomic operations. /// A database supporting atomic transactions.
pub trait Db: 'static + Send + Sync + Clone + Get { pub trait Db: 'static + Send + Sync + Clone + Get {
/// The type representing a database transaction.
type Transaction<'a>: DbTxn; type Transaction<'a>: DbTxn;
/// Calculate a key for a database entry.
///
/// Keys are separated by the database, the item within the database, and the item's key itself.
fn key(db_dst: &'static [u8], item_dst: &'static [u8], key: impl AsRef<[u8]>) -> Vec<u8> { fn key(db_dst: &'static [u8], item_dst: &'static [u8], key: impl AsRef<[u8]>) -> Vec<u8> {
let db_len = u8::try_from(db_dst.len()).unwrap(); let db_len = u8::try_from(db_dst.len()).unwrap();
let dst_len = u8::try_from(item_dst.len()).unwrap(); let dst_len = u8::try_from(item_dst.len()).unwrap();
[[db_len].as_ref(), db_dst, [dst_len].as_ref(), item_dst, key.as_ref()].concat() [[db_len].as_ref(), db_dst, [dst_len].as_ref(), item_dst, key.as_ref()].concat()
} }
/// Open a new transaction.
fn txn(&mut self) -> Self::Transaction<'_>; fn txn(&mut self) -> Self::Transaction<'_>;
} }
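
Concretely, the length-prefixed layout produced by `key`, sketched with the
in-memory backend:

use serai_db::{Db, MemDb};

fn demo() {
  // [len(db_dst)] ++ db_dst ++ [len(item_dst)] ++ item_dst ++ key
  let key = <MemDb as Db>::key(b"db", b"item", [9u8]);
  assert_eq!(key, vec![2, b'd', b'b', 4, b'i', b't', b'e', b'm', 9]);
}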


@@ -11,7 +11,7 @@ use crate::*;
#[derive(PartialEq, Eq, Debug)] #[derive(PartialEq, Eq, Debug)]
pub struct MemDbTxn<'a>(&'a MemDb, HashMap<Vec<u8>, Vec<u8>>, HashSet<Vec<u8>>); pub struct MemDbTxn<'a>(&'a MemDb, HashMap<Vec<u8>, Vec<u8>>, HashSet<Vec<u8>>);
impl<'a> Get for MemDbTxn<'a> { impl Get for MemDbTxn<'_> {
fn get(&self, key: impl AsRef<[u8]>) -> Option<Vec<u8>> { fn get(&self, key: impl AsRef<[u8]>) -> Option<Vec<u8>> {
if self.2.contains(key.as_ref()) { if self.2.contains(key.as_ref()) {
return None; return None;
@@ -23,7 +23,7 @@ impl<'a> Get for MemDbTxn<'a> {
.or_else(|| self.0 .0.read().unwrap().get(key.as_ref()).cloned()) .or_else(|| self.0 .0.read().unwrap().get(key.as_ref()).cloned())
} }
} }
impl<'a> DbTxn for MemDbTxn<'a> { impl DbTxn for MemDbTxn<'_> {
fn put(&mut self, key: impl AsRef<[u8]>, value: impl AsRef<[u8]>) { fn put(&mut self, key: impl AsRef<[u8]>, value: impl AsRef<[u8]>) {
self.2.remove(key.as_ref()); self.2.remove(key.as_ref());
self.1.insert(key.as_ref().to_vec(), value.as_ref().to_vec()); self.1.insert(key.as_ref().to_vec(), value.as_ref().to_vec());


@@ -7,7 +7,7 @@ repository = "https://github.com/serai-dex/serai/tree/develop/common/env"
authors = ["Luke Parker <lukeparker5132@gmail.com>"] authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = [] keywords = []
edition = "2021" edition = "2021"
rust-version = "1.60" rust-version = "1.64"
[package.metadata.docs.rs] [package.metadata.docs.rs]
all-features = true all-features = true

common/env/LICENSE

@@ -1,6 +1,6 @@
AGPL-3.0-only license AGPL-3.0-only license
Copyright (c) 2023 Luke Parker Copyright (c) 2023-2025 Luke Parker
This program is free software: you can redistribute it and/or modify This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License Version 3 as it under the terms of the GNU Affero General Public License Version 3 as


@@ -7,6 +7,7 @@ repository = "https://github.com/serai-dex/serai/tree/develop/common/patchable-a
authors = ["Luke Parker <lukeparker5132@gmail.com>"] authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = ["async", "sleep", "tokio", "smol", "async-std"] keywords = ["async", "sleep", "tokio", "smol", "async-std"]
edition = "2021" edition = "2021"
rust-version = "1.70"
[package.metadata.docs.rs] [package.metadata.docs.rs]
all-features = true all-features = true


@@ -1,6 +1,6 @@
MIT License MIT License
Copyright (c) 2024 Luke Parker Copyright (c) 2024-2025 Luke Parker
Permission is hereby granted, free of charge, to any person obtaining a copy Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal of this software and associated documentation files (the "Software"), to deal


@@ -1,13 +1,13 @@
[package] [package]
name = "simple-request" name = "simple-request"
version = "0.1.0" version = "0.3.0"
description = "A simple HTTP(S) request library" description = "A simple HTTP(S) request library"
license = "MIT" license = "MIT"
repository = "https://github.com/serai-dex/serai/tree/develop/common/simple-request" repository = "https://github.com/serai-dex/serai/tree/develop/common/request"
authors = ["Luke Parker <lukeparker5132@gmail.com>"] authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = ["http", "https", "async", "request", "ssl"] keywords = ["http", "https", "async", "request", "ssl"]
edition = "2021" edition = "2021"
rust-version = "1.70" rust-version = "1.71"
[package.metadata.docs.rs] [package.metadata.docs.rs]
all-features = true all-features = true
@@ -19,9 +19,10 @@ workspace = true
[dependencies] [dependencies]
tower-service = { version = "0.3", default-features = false } tower-service = { version = "0.3", default-features = false }
hyper = { version = "1", default-features = false, features = ["http1", "client"] } hyper = { version = "1", default-features = false, features = ["http1", "client"] }
hyper-util = { version = "0.1", default-features = false, features = ["http1", "client-legacy", "tokio"] } hyper-util = { version = "0.1", default-features = false, features = ["http1", "client-legacy"] }
http-body-util = { version = "0.1", default-features = false } http-body-util = { version = "0.1", default-features = false }
tokio = { version = "1", default-features = false } futures-util = { version = "0.3", default-features = false, features = ["std"] }
tokio = { version = "1", default-features = false, features = ["sync"] }
hyper-rustls = { version = "0.27", default-features = false, features = ["http1", "ring", "rustls-native-certs", "native-tokio"], optional = true } hyper-rustls = { version = "0.27", default-features = false, features = ["http1", "ring", "rustls-native-certs", "native-tokio"], optional = true }
@@ -29,6 +30,8 @@ zeroize = { version = "1", optional = true }
base64ct = { version = "1", features = ["alloc"], optional = true } base64ct = { version = "1", features = ["alloc"], optional = true }
[features] [features]
tls = ["hyper-rustls"] tokio = ["hyper-util/tokio"]
tls = ["tokio", "hyper-rustls"]
webpki-roots = ["tls", "hyper-rustls/webpki-roots"]
basic-auth = ["zeroize", "base64ct"] basic-auth = ["zeroize", "base64ct"]
default = ["tls"] default = ["tls"]


@@ -1,6 +1,6 @@
MIT License MIT License
Copyright (c) 2023 Luke Parker Copyright (c) 2023-2025 Luke Parker
Permission is hereby granted, free of charge, to any person obtaining a copy Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal of this software and associated documentation files (the "Software"), to deal


@@ -1,19 +1,20 @@
#![cfg_attr(docsrs, feature(doc_cfg))] #![cfg_attr(docsrs, feature(doc_cfg))]
#![doc = include_str!("../README.md")] #![doc = include_str!("../README.md")]
use core::{pin::Pin, future::Future};
use std::sync::Arc; use std::sync::Arc;
use tokio::sync::Mutex; use futures_util::FutureExt;
use ::tokio::sync::Mutex;
use tower_service::Service as TowerService; use tower_service::Service as TowerService;
use hyper::{Uri, header::HeaderValue, body::Bytes, client::conn::http1::SendRequest, rt::Executor};
pub use hyper;
use hyper_util::client::legacy::{Client as HyperClient, connect::HttpConnector};
#[cfg(feature = "tls")] #[cfg(feature = "tls")]
use hyper_rustls::{HttpsConnectorBuilder, HttpsConnector}; use hyper_rustls::{HttpsConnectorBuilder, HttpsConnector};
use hyper::{Uri, header::HeaderValue, body::Bytes, client::conn::http1::SendRequest};
use hyper_util::{
rt::tokio::TokioExecutor,
client::legacy::{Client as HyperClient, connect::HttpConnector},
};
pub use hyper;
mod request; mod request;
pub use request::*; pub use request::*;
@@ -37,52 +38,86 @@ type Connector = HttpConnector;
type Connector = HttpsConnector<HttpConnector>; type Connector = HttpsConnector<HttpConnector>;
#[derive(Clone, Debug)] #[derive(Clone, Debug)]
enum Connection { enum Connection<
E: 'static + Send + Sync + Clone + Executor<Pin<Box<dyn Send + Future<Output = ()>>>>,
> {
ConnectionPool(HyperClient<Connector, Full<Bytes>>), ConnectionPool(HyperClient<Connector, Full<Bytes>>),
Connection { Connection {
executor: E,
connector: Connector, connector: Connector,
host: Uri, host: Uri,
connection: Arc<Mutex<Option<SendRequest<Full<Bytes>>>>>, connection: Arc<Mutex<Option<SendRequest<Full<Bytes>>>>>,
}, },
} }
/// An HTTP client.
///
/// `tls` is only guaranteed to work when using the `tokio` executor. Instantiating a client when
/// the `tls` feature is active without using the `tokio` executor will cause errors.
#[derive(Clone, Debug)] #[derive(Clone, Debug)]
pub struct Client { pub struct Client<
connection: Connection, E: 'static + Send + Sync + Clone + Executor<Pin<Box<dyn Send + Future<Output = ()>>>>,
> {
connection: Connection<E>,
} }
impl Client { impl<E: 'static + Send + Sync + Clone + Executor<Pin<Box<dyn Send + Future<Output = ()>>>>>
fn connector() -> Connector { Client<E>
{
#[allow(clippy::unnecessary_wraps)]
fn connector() -> Result<Connector, Error> {
let mut res = HttpConnector::new(); let mut res = HttpConnector::new();
res.set_keepalive(Some(core::time::Duration::from_secs(60))); res.set_keepalive(Some(core::time::Duration::from_secs(60)));
res.set_nodelay(true); res.set_nodelay(true);
res.set_reuse_address(true); res.set_reuse_address(true);
#[cfg(feature = "tls")]
if core::any::TypeId::of::<E>() !=
core::any::TypeId::of::<hyper_util::rt::tokio::TokioExecutor>()
{
Err(Error::ConnectionError(
"`tls` feature enabled but not using the `tokio` executor".into(),
))?;
}
#[cfg(feature = "tls")] #[cfg(feature = "tls")]
res.enforce_http(false); res.enforce_http(false);
#[cfg(feature = "tls")] #[cfg(feature = "tls")]
let res = HttpsConnectorBuilder::new() let https = HttpsConnectorBuilder::new().with_native_roots();
.with_native_roots() #[cfg(all(feature = "tls", not(feature = "webpki-roots")))]
.expect("couldn't fetch system's SSL roots") let https = https.map_err(|e| {
.https_or_http() Error::ConnectionError(
.enable_http1() format!("couldn't load system's SSL root certificates and webpki-roots unavilable: {e:?}")
.wrap_connector(res); .into(),
res )
})?;
// Fallback to `webpki-roots` if present
#[cfg(all(feature = "tls", feature = "webpki-roots"))]
let https = https.unwrap_or(HttpsConnectorBuilder::new().with_webpki_roots());
#[cfg(feature = "tls")]
let res = https.https_or_http().enable_http1().wrap_connector(res);
Ok(res)
} }
pub fn with_connection_pool() -> Client { pub fn with_executor_and_connection_pool(executor: E) -> Result<Client<E>, Error> {
Client { Ok(Client {
connection: Connection::ConnectionPool( connection: Connection::ConnectionPool(
HyperClient::builder(TokioExecutor::new()) HyperClient::builder(executor)
.pool_idle_timeout(core::time::Duration::from_secs(60)) .pool_idle_timeout(core::time::Duration::from_secs(60))
.build(Self::connector()), .build(Self::connector()?),
), ),
} })
} }
pub fn without_connection_pool(host: &str) -> Result<Client, Error> { pub fn with_executor_and_without_connection_pool(
executor: E,
host: &str,
) -> Result<Client<E>, Error> {
Ok(Client { Ok(Client {
connection: Connection::Connection { connection: Connection::Connection {
connector: Self::connector(), executor,
connector: Self::connector()?,
host: { host: {
let uri: Uri = host.parse().map_err(|_| Error::InvalidUri)?; let uri: Uri = host.parse().map_err(|_| Error::InvalidUri)?;
if uri.host().is_none() { if uri.host().is_none() {
@@ -95,9 +130,9 @@ impl Client {
}) })
} }
pub async fn request<R: Into<Request>>(&self, request: R) -> Result<Response<'_>, Error> { pub async fn request<R: Into<Request>>(&self, request: R) -> Result<Response<'_, E>, Error> {
let request: Request = request.into(); let request: Request = request.into();
let mut request = request.0; let Request { mut request, response_size_limit } = request;
if let Some(header_host) = request.headers().get(hyper::header::HOST) { if let Some(header_host) = request.headers().get(hyper::header::HOST) {
match &self.connection { match &self.connection {
Connection::ConnectionPool(_) => {} Connection::ConnectionPool(_) => {}
@@ -131,7 +166,7 @@ impl Client {
Connection::ConnectionPool(client) => { Connection::ConnectionPool(client) => {
client.request(request).await.map_err(Error::HyperUtil)? client.request(request).await.map_err(Error::HyperUtil)?
} }
Connection::Connection { connector, host, connection } => { Connection::Connection { executor, connector, host, connection } => {
let mut connection_lock = connection.lock().await; let mut connection_lock = connection.lock().await;
// If there's not a connection... // If there's not a connection...
@@ -143,28 +178,46 @@ impl Client {
let call_res = call_res.map_err(Error::ConnectionError); let call_res = call_res.map_err(Error::ConnectionError);
let (requester, connection) = let (requester, connection) =
hyper::client::conn::http1::handshake(call_res?).await.map_err(Error::Hyper)?; hyper::client::conn::http1::handshake(call_res?).await.map_err(Error::Hyper)?;
// This will die when we drop the requester, so we don't need to track an AbortHandle // This task will die when we drop the requester
// for it executor.execute(Box::pin(connection.map(|_| ())));
tokio::spawn(connection);
*connection_lock = Some(requester); *connection_lock = Some(requester);
} }
let connection = connection_lock.as_mut().unwrap(); let connection = connection_lock.as_mut().expect("lock over the connection was poisoned");
let mut err = connection.ready().await.err(); let mut err = connection.ready().await.err();
if err.is_none() { if err.is_none() {
// Send the request // Send the request
let res = connection.send_request(request).await; let response = connection.send_request(request).await;
if let Ok(res) = res { if let Ok(response) = response {
return Ok(Response(res, self)); return Ok(Response { response, size_limit: response_size_limit, client: self });
} }
err = res.err(); err = response.err();
} }
// Since this connection has been put into an error state, drop it // Since this connection has been put into an error state, drop it
*connection_lock = None; *connection_lock = None;
Err(Error::Hyper(err.unwrap()))? Err(Error::Hyper(err.expect("only here if `err` is some yet no error")))?
} }
}; };
Ok(Response(response, self)) Ok(Response { response, size_limit: response_size_limit, client: self })
} }
} }
#[cfg(feature = "tokio")]
mod tokio {
use hyper_util::rt::tokio::TokioExecutor;
use super::*;
pub type TokioClient = Client<TokioExecutor>;
impl Client<TokioExecutor> {
pub fn with_connection_pool() -> Result<Self, Error> {
Self::with_executor_and_connection_pool(TokioExecutor::new())
}
pub fn without_connection_pool(host: &str) -> Result<Self, Error> {
Self::with_executor_and_without_connection_pool(TokioExecutor::new(), host)
}
}
}
#[cfg(feature = "tokio")]
pub use tokio::TokioClient;
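
Tying the pieces together, a sketch of the `tokio`-executor client with the new
response size limit; this assumes `Full`, `Request`, `Error`, and `TokioClient`
are all exported as the diff suggests:

use std::io::Read;
use simple_request::{hyper, Full, Request, Error, TokioClient};

async fn fetch() -> Result<Vec<u8>, Error> {
  let client = TokioClient::without_connection_pool("https://example.com")?;
  let mut request: Request = hyper::Request::builder()
    .uri("https://example.com/")
    .body(Full::new(hyper::body::Bytes::new()))
    .expect("statically-valid request")
    .into();
  // Bound the response body to 1 MiB (a single HTTP frame may exceed this)
  request.set_response_size_limit(Some(1 << 20));
  let response = client.request(request).await?;
  let mut body = vec![];
  response.body().await?.read_to_end(&mut body).expect("reading from an in-memory buffer");
  Ok(body)
}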


@@ -7,11 +7,15 @@ pub use http_body_util::Full;
 use crate::Error;

 #[derive(Debug)]
-pub struct Request(pub(crate) hyper::Request<Full<Bytes>>);
+pub struct Request {
+  pub(crate) request: hyper::Request<Full<Bytes>>,
+  pub(crate) response_size_limit: Option<usize>,
+}

 impl Request {
   #[cfg(feature = "basic-auth")]
   fn username_password_from_uri(&self) -> Result<(String, String), Error> {
-    if let Some(authority) = self.0.uri().authority() {
+    if let Some(authority) = self.request.uri().authority() {
       let authority = authority.as_str();
       if authority.contains('@') {
         // Decode the username and password from the URI

@@ -36,9 +40,10 @@ impl Request {
     let mut formatted = format!("{username}:{password}");
     let mut encoded = Base64::encode_string(formatted.as_bytes());
     formatted.zeroize();
-    self.0.headers_mut().insert(
+    self.request.headers_mut().insert(
       hyper::header::AUTHORIZATION,
-      HeaderValue::from_str(&format!("Basic {encoded}")).unwrap(),
+      HeaderValue::from_str(&format!("Basic {encoded}"))
+        .expect("couldn't form header from base64-encoded string"),
     );
     encoded.zeroize();
   }

@@ -59,9 +64,17 @@ impl Request {
   pub fn with_basic_auth(&mut self) {
     let _ = self.basic_auth_from_uri();
   }
+
+  /// Set a size limit for the response.
+  ///
+  /// This may be exceeded by a single HTTP frame and accordingly isn't perfect.
+  pub fn set_response_size_limit(&mut self, response_size_limit: Option<usize>) {
+    self.response_size_limit = response_size_limit;
+  }
 }

 impl From<hyper::Request<Full<Bytes>>> for Request {
   fn from(request: hyper::Request<Full<Bytes>>) -> Request {
-    Request(request)
+    Request { request, response_size_limit: None }
   }
 }
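As a usage sketch (not part of the diff): the `From` impl above defaults `response_size_limit` to `None`, which `set_response_size_limit` can then bound. The constructors come from the `tokio` module earlier in this diff; treating `TokioClient`, `Request`, and `Error` as crate-root re-exports is an assumption about this crate's public surface.

```rust
use hyper::body::Bytes;
use http_body_util::Full;

use crate::{Error, Request, TokioClient}; // assumed crate-root re-exports

async fn capped_request() -> Result<(), Error> {
  // From the `tokio` module above; requires the "tokio" feature
  let client = TokioClient::with_connection_pool()?;

  let hyper_request = hyper::Request::builder()
    .uri("http://example.com/")
    .body(Full::new(Bytes::new()))
    .expect("couldn't build a static request");

  // The `From` impl above defaults `response_size_limit` to `None`
  let mut request = Request::from(hyper_request);
  // Cap the response at 1 MiB (may be exceeded by one HTTP frame)
  request.set_response_size_limit(Some(1 << 20));

  // ... `request` would then be sent via the client ...
  let _ = (client, request);
  Ok(())
}
```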

View File

@@ -1,24 +1,54 @@
+use core::{pin::Pin, future::Future};
+use std::io;
+
 use hyper::{
   StatusCode,
   header::{HeaderValue, HeaderMap},
-  body::{Buf, Incoming},
+  body::Incoming,
+  rt::Executor,
 };
 use http_body_util::BodyExt;
+use futures_util::{Stream, StreamExt};

 use crate::{Client, Error};

 // Borrows the client so its async task lives as long as this response exists.
 #[allow(dead_code)]
 #[derive(Debug)]
-pub struct Response<'a>(pub(crate) hyper::Response<Incoming>, pub(crate) &'a Client);
-impl<'a> Response<'a> {
+pub struct Response<
+  'a,
+  E: 'static + Send + Sync + Clone + Executor<Pin<Box<dyn Send + Future<Output = ()>>>>,
+> {
+  pub(crate) response: hyper::Response<Incoming>,
+  pub(crate) size_limit: Option<usize>,
+  pub(crate) client: &'a Client<E>,
+}
+impl<E: 'static + Send + Sync + Clone + Executor<Pin<Box<dyn Send + Future<Output = ()>>>>>
+  Response<'_, E>
+{
   pub fn status(&self) -> StatusCode {
-    self.0.status()
+    self.response.status()
   }
   pub fn headers(&self) -> &HeaderMap<HeaderValue> {
-    self.0.headers()
+    self.response.headers()
   }
   pub async fn body(self) -> Result<impl std::io::Read, Error> {
-    Ok(self.0.into_body().collect().await.map_err(Error::Hyper)?.aggregate().reader())
+    let mut body = self.response.into_body().into_data_stream();
+    let mut res: Vec<u8> = vec![];
+    loop {
+      if let Some(size_limit) = self.size_limit {
+        let (lower, upper) = body.size_hint();
+        if res.len().wrapping_add(upper.unwrap_or(lower)) > size_limit.min(usize::MAX - 1) {
+          Err(Error::ConnectionError("response exceeded size limit".into()))?;
+        }
+      }
+      let Some(part) = body.next().await else { break };
+      let part = part.map_err(Error::Hyper)?;
+      res.extend(part.as_ref());
+    }
+    Ok(io::Cursor::new(res))
   }
 }

View File

@@ -1,13 +1,13 @@
 [package]
 name = "std-shims"
-version = "0.1.4"
+version = "0.1.5"
 description = "A series of std shims to make alloc more feasible"
 license = "MIT"
 repository = "https://github.com/serai-dex/serai/tree/develop/common/std-shims"
 authors = ["Luke Parker <lukeparker5132@gmail.com>"]
 keywords = ["nostd", "no_std", "alloc", "io"]
 edition = "2021"
-rust-version = "1.64"
+rust-version = "1.65"

 [package.metadata.docs.rs]
 all-features = true

@@ -18,9 +18,10 @@ workspace = true
 [dependencies]
 rustversion = { version = "1", default-features = false }
-spin = { version = "0.10", default-features = false, features = ["use_ticket_mutex", "once", "lazy"] }
+spin = { version = "0.10", default-features = false, features = ["use_ticket_mutex", "fair_mutex", "once", "lazy"] }
-hashbrown = { version = "0.14", default-features = false, features = ["ahash", "inline-more"] }
+hashbrown = { version = "0.16", default-features = false, features = ["default-hasher", "inline-more"], optional = true }

 [features]
-std = []
+alloc = ["hashbrown"]
+std = ["alloc", "spin/std"]
 default = ["std"]

View File

@@ -1,6 +1,6 @@
 MIT License

-Copyright (c) 2023 Luke Parker
+Copyright (c) 2023-2025 Luke Parker

 Permission is hereby granted, free of charge, to any person obtaining a copy
 of this software and associated documentation files (the "Software"), to deal

View File

@@ -1,11 +1,28 @@
-# std shims
+# `std` shims

-A crate which passes through to std when the default `std` feature is enabled,
-yet provides a series of shims when it isn't.
+`std-shims` is a Rust crate with two purposes:
+- Expand the functionality of `core` and `alloc`
+- Polyfill functionality only available on newer versions of Rust

-No guarantee of one-to-one parity is provided. The shims provided aim to be sufficient for the
-average case.
+The goal is to make supporting no-`std` environments, and older versions of
+Rust, as simple as possible. For most use cases, replacing `std::` with
+`std_shims::` and adding `use std_shims::prelude::*` is sufficient to take full
+advantage of `std-shims`.

-`HashSet` and `HashMap` are provided via `hashbrown`. Synchronization primitives are provided via
-`spin` (avoiding a requirement on `critical-section`).
-types are not guaranteed to be
+# API Surface
+
+`std-shims` only aims to have items _mutually available_ between `alloc` (with
+extra dependencies) and `std` publicly exposed. Items exclusive to `std`, with
+no shims available, will not be exported by `std-shims`.
+
+# Dependencies
+
+`HashSet` and `HashMap` are provided via `hashbrown`. Synchronization
+primitives are provided via `spin` (avoiding a requirement on
+`critical-section`). Sections of `std::io` are independently matched where
+possible. `rustversion` is used to detect when to provide polyfills.
+
+# Disclaimer
+
+No guarantee of one-to-one parity is provided. The shims provided aim to be
+sufficient for the average case. Pull requests are _welcome_.
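Not from the diff itself, a minimal sketch of the drop-in usage the new README describes, assuming the crate is pulled in with `default-features = false, features = ["alloc"]` for no-`std` builds:

```rust
#![cfg_attr(not(feature = "std"), no_std)]

use std_shims::prelude::*;
use std_shims::{
  io::{self, Read},
  collections::HashMap,
  sync::Mutex,
};

// `Read` resolves to `std::io::Read` or the shim, per the enabled features
fn read_u32(reader: &mut impl Read) -> io::Result<u32> {
  let mut buf = [0; 4];
  let mut filled = 0;
  while filled < buf.len() {
    let n = reader.read(&mut buf[filled ..])?;
    if n == 0 {
      Err(io::Error::new(io::ErrorKind::UnexpectedEof, "incomplete u32"))?;
    }
    filled += n;
  }
  Ok(u32::from_le_bytes(buf))
}

// The `Mutex` shim's `new` is `const`, so statics work on either backend
static LABELS: Mutex<Option<HashMap<u32, String>>> = Mutex::new(None);
```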

View File

@@ -1,7 +1,7 @@
+#[cfg(all(feature = "alloc", not(feature = "std")))]
+pub use extern_alloc::collections::*;
+#[cfg(all(feature = "alloc", not(feature = "std")))]
+pub use hashbrown::{HashSet, HashMap};
+
 #[cfg(feature = "std")]
 pub use std::collections::*;
-
-#[cfg(not(feature = "std"))]
-pub use alloc::collections::*;
-#[cfg(not(feature = "std"))]
-pub use hashbrown::{HashSet, HashMap};

View File

@@ -1,42 +1,74 @@
-#[cfg(feature = "std")]
-pub use std::io::*;
-
 #[cfg(not(feature = "std"))]
 mod shims {
-  use core::fmt::{Debug, Formatter};
-  use alloc::{boxed::Box, vec::Vec};
+  use core::fmt::{self, Debug, Display, Formatter};
+  #[cfg(feature = "alloc")]
+  use extern_alloc::{boxed::Box, vec::Vec};
+
+  use crate::error::Error as CoreError;

+  /// The kind of error.
   #[derive(Clone, Copy, PartialEq, Eq, Debug)]
   pub enum ErrorKind {
     UnexpectedEof,
     Other,
   }

+  /// An error.
+  #[derive(Debug)]
   pub struct Error {
     kind: ErrorKind,
-    error: Box<dyn Send + Sync>,
+    #[cfg(feature = "alloc")]
+    error: Box<dyn Send + Sync + CoreError>,
   }

-  impl Debug for Error {
-    fn fmt(&self, fmt: &mut Formatter<'_>) -> core::result::Result<(), core::fmt::Error> {
-      fmt.debug_struct("Error").field("kind", &self.kind).finish_non_exhaustive()
+  impl Display for Error {
+    fn fmt(&self, f: &mut Formatter<'_>) -> fmt::Result {
+      <Self as Debug>::fmt(self, f)
     }
   }
+  impl CoreError for Error {}

+  #[cfg(not(feature = "alloc"))]
+  pub trait IntoBoxSendSyncError {}
+  #[cfg(not(feature = "alloc"))]
+  impl<I> IntoBoxSendSyncError for I {}
+  #[cfg(feature = "alloc")]
+  pub trait IntoBoxSendSyncError: Into<Box<dyn Send + Sync + CoreError>> {}
+  #[cfg(feature = "alloc")]
+  impl<I: Into<Box<dyn Send + Sync + CoreError>>> IntoBoxSendSyncError for I {}

   impl Error {
-    pub fn new<E: 'static + Send + Sync>(kind: ErrorKind, error: E) -> Error {
-      Error { kind, error: Box::new(error) }
+    /// Create a new error.
+    ///
+    /// The error object itself is silently dropped when `alloc` is not enabled.
+    #[allow(unused)]
+    pub fn new<E: 'static + IntoBoxSendSyncError>(kind: ErrorKind, error: E) -> Error {
+      #[cfg(not(feature = "alloc"))]
+      let res = Error { kind };
+      #[cfg(feature = "alloc")]
+      let res = Error { kind, error: error.into() };
+      res
     }

-    pub fn other<E: 'static + Send + Sync>(error: E) -> Error {
-      Error { kind: ErrorKind::Other, error: Box::new(error) }
+    /// Create a new error with `io::ErrorKind::Other` as its kind.
+    ///
+    /// The error object itself is silently dropped when `alloc` is not enabled.
+    #[allow(unused)]
+    pub fn other<E: 'static + IntoBoxSendSyncError>(error: E) -> Error {
+      #[cfg(not(feature = "alloc"))]
+      let res = Error { kind: ErrorKind::Other };
+      #[cfg(feature = "alloc")]
+      let res = Error { kind: ErrorKind::Other, error: error.into() };
+      res
     }

+    /// The kind of error.
     pub fn kind(&self) -> ErrorKind {
       self.kind
     }

-    pub fn into_inner(self) -> Option<Box<dyn Send + Sync>> {
+    /// Retrieve the inner error.
+    #[cfg(feature = "alloc")]
+    pub fn into_inner(self) -> Option<Box<dyn Send + Sync + CoreError>> {
       Some(self.error)
     }
   }

@@ -64,6 +96,12 @@ mod shims {
     }
   }

+  impl<R: Read> Read for &mut R {
+    fn read(&mut self, buf: &mut [u8]) -> Result<usize> {
+      R::read(*self, buf)
+    }
+  }
+
   pub trait BufRead: Read {
     fn fill_buf(&mut self) -> Result<&[u8]>;
     fn consume(&mut self, amt: usize);

@@ -88,6 +126,7 @@ mod shims {
     }
   }

+  #[cfg(feature = "alloc")]
   impl Write for Vec<u8> {
     fn write(&mut self, buf: &[u8]) -> Result<usize> {
       self.extend(buf);

@@ -95,6 +134,8 @@ mod shims {
     }
   }
 }

 #[cfg(not(feature = "std"))]
 pub use shims::*;
+#[cfg(feature = "std")]
+pub use std::io::{ErrorKind, Error, Result, Read, BufRead, Write};

View File

@@ -2,17 +2,44 @@
 #![doc = include_str!("../README.md")]
 #![cfg_attr(not(feature = "std"), no_std)]

-pub extern crate alloc;
+#[cfg(not(feature = "alloc"))]
+pub use core::*;
+#[cfg(not(feature = "alloc"))]
+pub use core::{alloc, borrow, ffi, fmt, slice, str, task};
+
+#[cfg(not(feature = "std"))]
+#[rustversion::before(1.81)]
+pub mod error {
+  use core::fmt::{Debug, Display};
+  pub trait Error: Debug + Display {}
+}
+#[cfg(not(feature = "std"))]
+#[rustversion::since(1.81)]
+pub use core::error;
+
+#[cfg(feature = "alloc")]
+extern crate alloc as extern_alloc;
+#[cfg(all(feature = "alloc", not(feature = "std")))]
+pub use extern_alloc::{alloc, borrow, boxed, ffi, fmt, rc, slice, str, string, task, vec, format};
+#[cfg(feature = "std")]
+pub use std::{alloc, borrow, boxed, error, ffi, fmt, rc, slice, str, string, task, vec, format};
+
+pub mod sync;
 pub mod collections;
 pub mod io;
-pub mod sync;
-
-pub use alloc::vec;
-pub use alloc::str;
-pub use alloc::string;

 pub mod prelude {
+  // Shim the `std` prelude
+  #[cfg(feature = "alloc")]
+  pub use extern_alloc::{
+    format, vec,
+    borrow::ToOwned,
+    boxed::Box,
+    vec::Vec,
+    string::{String, ToString},
+  };
+
+  // Shim `div_ceil`
   #[rustversion::before(1.73)]
   #[doc(hidden)]
   pub trait StdShimsDivCeil {

@@ -53,6 +80,7 @@ pub mod prelude {
     }
   }

+  // Shim `io::Error::other`
   #[cfg(feature = "std")]
   #[rustversion::before(1.74)]
   #[doc(hidden)]

View File

@@ -1,19 +1,28 @@
-pub use core::sync::*;
-pub use alloc::sync::*;
+pub use core::sync::atomic;
+#[cfg(all(feature = "alloc", not(feature = "std")))]
+pub use extern_alloc::sync::{Arc, Weak};
+#[cfg(feature = "std")]
+pub use std::sync::{Arc, Weak};

 mod mutex_shim {
-  #[cfg(feature = "std")]
-  pub use std::sync::*;
   #[cfg(not(feature = "std"))]
-  pub use spin::*;
+  pub use spin::{Mutex, MutexGuard};
+  #[cfg(feature = "std")]
+  pub use std::sync::{Mutex, MutexGuard};

+  /// A shimmed `Mutex` with an API mutual to `spin` and `std`.
   #[derive(Default, Debug)]
   pub struct ShimMutex<T>(Mutex<T>);
   impl<T> ShimMutex<T> {
+    /// Construct a new `Mutex`.
     pub const fn new(value: T) -> Self {
       Self(Mutex::new(value))
     }

+    /// Acquire a lock on the contents of the `Mutex`.
+    ///
+    /// On no-`std` environments, this may spin until the lock is acquired. On `std` environments,
+    /// this may panic if the `Mutex` was poisoned.
     pub fn lock(&self) -> MutexGuard<'_, T> {
       #[cfg(feature = "std")]
       let res = self.0.lock().unwrap();

@@ -25,10 +34,11 @@ mod mutex_shim {
 }
 pub use mutex_shim::{ShimMutex as Mutex, MutexGuard};

-#[cfg(not(feature = "std"))]
-pub use spin::Lazy as LazyLock;
 #[rustversion::before(1.80)]
-#[cfg(feature = "std")]
+pub use spin::Lazy as LazyLock;
+#[rustversion::since(1.80)]
+#[cfg(not(feature = "std"))]
 pub use spin::Lazy as LazyLock;
 #[rustversion::since(1.80)]
 #[cfg(feature = "std")]

common/task/Cargo.toml (new file, 22 lines)
View File

@@ -0,0 +1,22 @@
[package]
name = "serai-task"
version = "0.1.0"
description = "A task schema for Serai services"
license = "AGPL-3.0-only"
repository = "https://github.com/serai-dex/serai/tree/develop/common/task"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = []
edition = "2021"
publish = false
rust-version = "1.75"
[package.metadata.docs.rs]
all-features = true
rustdoc-args = ["--cfg", "docsrs"]
[lints]
workspace = true
[dependencies]
log = { version = "0.4", default-features = false, features = ["std"] }
tokio = { version = "1", default-features = false, features = ["macros", "sync", "time"] }

View File

@@ -1,6 +1,6 @@
 AGPL-3.0-only license

-Copyright (c) 2024 Luke Parker
+Copyright (c) 2022-2025 Luke Parker

 This program is free software: you can redistribute it and/or modify
 it under the terms of the GNU Affero General Public License Version 3 as

common/task/README.md (new file, 3 lines)
View File

@@ -0,0 +1,3 @@
# Task
A schema to define tasks to be run ad infinitum.

common/task/src/lib.rs (new file, 161 lines)
View File

@@ -0,0 +1,161 @@
#![cfg_attr(docsrs, feature(doc_cfg))]
#![doc = include_str!("../README.md")]
#![deny(missing_docs)]
use core::{
fmt::{self, Debug},
future::Future,
time::Duration,
};
use tokio::sync::mpsc;
mod type_name;
/// A handle for a task.
///
/// The task will only stop running once all handles for it are dropped.
//
// `run_now` isn't infallible if the task may have been closed. `run_now` on a closed task would
// either need to panic (historic behavior), silently drop the fact the task can't be run, or
// return an error. Instead of having a potential panic, and instead of modeling the error
// behavior, this task can't be closed unless all handles are dropped, ensuring calls to `run_now`
// are infallible.
#[derive(Clone)]
pub struct TaskHandle {
run_now: mpsc::Sender<()>,
#[allow(dead_code)] // This is used to track if all handles have been dropped
close: mpsc::Sender<()>,
}
/// A task's internal structures.
pub struct Task {
run_now: mpsc::Receiver<()>,
close: mpsc::Receiver<()>,
}
impl Task {
/// Create a new task definition.
pub fn new() -> (Self, TaskHandle) {
// Uses a capacity of 1 as any call to run as soon as possible satisfies all calls to run as
// soon as possible
let (run_now_send, run_now_recv) = mpsc::channel(1);
// And any call to close satisfies all calls to close
let (close_send, close_recv) = mpsc::channel(1);
(
Self { run_now: run_now_recv, close: close_recv },
TaskHandle { run_now: run_now_send, close: close_send },
)
}
}
impl TaskHandle {
/// Tell the task to run now (and not whenever its next iteration on a timer is).
pub fn run_now(&self) {
#[allow(clippy::match_same_arms)]
match self.run_now.try_send(()) {
Ok(()) => {}
// NOP on full, as this task will already be run as soon as possible
Err(mpsc::error::TrySendError::Full(())) => {}
Err(mpsc::error::TrySendError::Closed(())) => {
// The task should only be closed if all handles are dropped, and this one hasn't been
panic!("task was unexpectedly closed when calling run_now")
}
}
}
}
/// An enum which can't be constructed, representing that the task does not error.
pub enum DoesNotError {}
impl Debug for DoesNotError {
fn fmt(&self, _: &mut fmt::Formatter<'_>) -> Result<(), fmt::Error> {
// This type can't be constructed so we'll never have a `&self` to call this fn with
unreachable!()
}
}
/// A task to be continually run.
pub trait ContinuallyRan: Sized + Send {
/// The amount of seconds before this task should be polled again.
const DELAY_BETWEEN_ITERATIONS: u64 = 5;
/// The maximum amount of seconds before this task should be run again.
///
/// Upon error, the amount of time waited will be linearly increased until this limit.
const MAX_DELAY_BETWEEN_ITERATIONS: u64 = 120;
/// The error potentially yielded upon running an iteration of this task.
type Error: Debug;
/// Run an iteration of the task.
///
/// If this returns `true`, all dependents of the task will immediately have a new iteration run
/// (without waiting for whatever timer they were already on).
fn run_iteration(&mut self) -> impl Send + Future<Output = Result<bool, Self::Error>>;
/// Continually run the task.
fn continually_run(
mut self,
mut task: Task,
dependents: Vec<TaskHandle>,
) -> impl Send + Future<Output = ()> {
async move {
// The default number of seconds to sleep before running the task again
let default_sleep_before_next_task = Self::DELAY_BETWEEN_ITERATIONS;
// The current number of seconds to sleep before running the task again
// We increment this upon errors in order to not flood the logs with errors
let mut current_sleep_before_next_task = default_sleep_before_next_task;
let increase_sleep_before_next_task = |current_sleep_before_next_task: &mut u64| {
let new_sleep = *current_sleep_before_next_task + default_sleep_before_next_task;
// Set a limit of sleeping for two minutes
*current_sleep_before_next_task = new_sleep.min(Self::MAX_DELAY_BETWEEN_ITERATIONS);
};
loop {
// If we were told to close/all handles were dropped, drop it
{
let should_close = task.close.try_recv();
match should_close {
Ok(()) | Err(mpsc::error::TryRecvError::Disconnected) => break,
Err(mpsc::error::TryRecvError::Empty) => {}
}
}
match self.run_iteration().await {
Ok(run_dependents) => {
// Upon a successful (error-free) loop iteration, reset the amount of time we sleep
current_sleep_before_next_task = default_sleep_before_next_task;
if run_dependents {
for dependent in &dependents {
dependent.run_now();
}
}
}
Err(e) => {
// Get the type name
let type_name = type_name::strip_type_name(core::any::type_name::<Self>());
// Print the error as a warning, prefixed by the task's type
log::warn!("{type_name}: {e:?}");
increase_sleep_before_next_task(&mut current_sleep_before_next_task);
}
}
// Don't run the task again for another few seconds UNLESS told to run now
/*
We could replace tokio::mpsc with async_channel, tokio::time::sleep with
patchable_async_sleep::sleep, and tokio::select with futures_lite::future::or
It isn't worth the effort when patchable_async_sleep::sleep will still resolve to tokio
*/
tokio::select! {
() = tokio::time::sleep(Duration::from_secs(current_sleep_before_next_task)) => {},
msg = task.run_now.recv() => {
// Check if this is firing because the handle was dropped
if msg.is_none() {
break;
}
},
}
}
}
}
}
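Not part of the commit itself, a minimal sketch of how this API composes, assuming `tokio` and `log` as dependencies (as in the Cargo.toml above):

```rust
use core::future::Future;

use serai_task::{Task, ContinuallyRan, DoesNotError};

struct Heartbeat;
impl ContinuallyRan for Heartbeat {
  type Error = DoesNotError;
  fn run_iteration(&mut self) -> impl Send + Future<Output = Result<bool, Self::Error>> {
    async move {
      log::info!("heartbeat");
      // No dependents need to immediately run in response to this iteration
      Ok(false)
    }
  }
}

#[tokio::main]
async fn main() {
  let (task, handle) = Task::new();
  // Iterates every `DELAY_BETWEEN_ITERATIONS` seconds, with no dependents
  tokio::spawn(Heartbeat.continually_run(task, vec![]));
  // Skip the timer and run an iteration immediately
  handle.run_now();
  // Once every handle is dropped, the task's loop breaks
  drop(handle);
}
```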

View File

@@ -0,0 +1,31 @@
/// Strip the modules from a type name.
// This may be of the form `a::b::C`, in which case we only want `C`
pub(crate) fn strip_type_name(full_type_name: &'static str) -> String {
// It also may be `a::b::C<d::e::F>`, in which case, we only attempt to strip `a::b`
let mut by_generics = full_type_name.split('<');
// Strip to just `C`
let full_outer_object_name = by_generics.next().unwrap();
let mut outer_object_name_parts = full_outer_object_name.split("::");
let mut last_part_in_outer_object_name = outer_object_name_parts.next().unwrap();
for part in outer_object_name_parts {
last_part_in_outer_object_name = part;
}
// Push back on the generic terms
let mut type_name = last_part_in_outer_object_name.to_string();
for generic in by_generics {
type_name.push('<');
type_name.push_str(generic);
}
type_name
}
#[test]
fn test_strip_type_name() {
assert_eq!(strip_type_name("core::option::Option"), "Option");
assert_eq!(
strip_type_name("core::option::Option<alloc::string::String>"),
"Option<alloc::string::String>"
);
}

View File

@@ -7,7 +7,9 @@ repository = "https://github.com/serai-dex/serai/tree/develop/common/zalloc"
 authors = ["Luke Parker <lukeparker5132@gmail.com>"]
 keywords = []
 edition = "2021"
-rust-version = "1.77"
+# This must be specified with the patch version, else Rust believes `1.77` < `1.77.0` and will
+# refuse to compile due to relying on versions introduced with `1.77.0`
+rust-version = "1.77.0"

 [package.metadata.docs.rs]
 all-features = true

View File

@@ -1,6 +1,6 @@
 MIT License

-Copyright (c) 2022-2023 Luke Parker
+Copyright (c) 2022-2025 Luke Parker

 Permission is hereby granted, free of charge, to any person obtaining a copy
 of this software and associated documentation files (the "Software"), to deal

View File

@@ -17,50 +17,45 @@ rustdoc-args = ["--cfg", "docsrs"]
 workspace = true

 [dependencies]
-async-trait = { version = "0.1", default-features = false }
-
 zeroize = { version = "^1.5", default-features = false, features = ["std"] }
-bitvec = { version = "1", default-features = false, features = ["std"] }
 rand_core = { version = "0.6", default-features = false, features = ["std"] }

-blake2 = { version = "0.10", default-features = false, features = ["std"] }
+blake2 = { version = "0.11.0-rc.0", default-features = false, features = ["alloc"] }
-schnorrkel = { version = "0.11", default-features = false, features = ["std"] }
-
-transcript = { package = "flexible-transcript", path = "../crypto/transcript", default-features = false, features = ["std", "recommended"] }
 dalek-ff-group = { path = "../crypto/dalek-ff-group", default-features = false, features = ["std"] }
 ciphersuite = { path = "../crypto/ciphersuite", default-features = false, features = ["std"] }
-schnorr = { package = "schnorr-signatures", path = "../crypto/schnorr", default-features = false, features = ["std", "aggregate"] }
-dkg-musig = { path = "../crypto/dkg/musig", default-features = false, features = ["std"] }
+dkg = { package = "dkg-musig", path = "../crypto/dkg/musig", default-features = false, features = ["std"] }
 frost = { package = "modular-frost", path = "../crypto/frost" }
 frost-schnorrkel = { path = "../crypto/schnorrkel" }

-scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["std", "derive"] }
-
-zalloc = { path = "../common/zalloc" }
-serai-db = { path = "../common/db" }
-serai-env = { path = "../common/env" }
-
-processor-messages = { package = "serai-processor-messages", path = "../processor/messages" }
-message-queue = { package = "serai-message-queue", path = "../message-queue" }
-tributary = { package = "tributary-chain", path = "./tributary" }
-
-sp-application-crypto = { git = "https://github.com/serai-dex/patch-polkadot-sdk", rev = "da19e1f8ca7a9e2cbf39fbfa493918eeeb45e10b", default-features = false, features = ["std"] }
-serai-client = { path = "../substrate/client", default-features = false, features = ["serai", "borsh"] }
-
 hex = { version = "0.4", default-features = false, features = ["std"] }
 borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] }

+zalloc = { path = "../common/zalloc" }
+serai-db = { path = "../common/db" }
+serai-env = { path = "../common/env" }
+serai-task = { path = "../common/task", version = "0.1" }
+
+messages = { package = "serai-processor-messages", path = "../processor/messages" }
+message-queue = { package = "serai-message-queue", path = "../message-queue" }
+tributary-sdk = { path = "./tributary-sdk" }
+
+serai-client-serai = { path = "../substrate/client/serai", default-features = false }
+
 log = { version = "0.4", default-features = false, features = ["std"] }
 env_logger = { version = "0.10", default-features = false, features = ["humantime"] }

-futures-util = { version = "0.3", default-features = false, features = ["std"] }
-tokio = { version = "1", default-features = false, features = ["rt-multi-thread", "sync", "time", "macros"] }
-libp2p = { version = "0.52", default-features = false, features = ["tokio", "tcp", "noise", "yamux", "request-response", "gossipsub", "macros"] }
+tokio = { version = "1", default-features = false, features = ["time", "sync", "macros", "rt-multi-thread"] }

-[dev-dependencies]
-tributary = { package = "tributary-chain", path = "./tributary", features = ["tests"] }
-sp-application-crypto = { git = "https://github.com/serai-dex/patch-polkadot-sdk", rev = "da19e1f8ca7a9e2cbf39fbfa493918eeeb45e10b", default-features = false, features = ["std"] }
-sp-runtime = { git = "https://github.com/serai-dex/patch-polkadot-sdk", rev = "da19e1f8ca7a9e2cbf39fbfa493918eeeb45e10b", default-features = false, features = ["std"] }
+serai-cosign = { path = "./cosign" }
+serai-coordinator-substrate = { path = "./substrate" }
+serai-coordinator-tributary = { path = "./tributary" }
+serai-coordinator-p2p = { path = "./p2p" }
+serai-coordinator-libp2p-p2p = { path = "./p2p/libp2p" }

 [features]
-longer-reattempts = []
+longer-reattempts = ["serai-coordinator-tributary/longer-reattempts"]
 parity-db = ["serai-db/parity-db"]
 rocksdb = ["serai-db/rocksdb"]

View File

@@ -1,6 +1,6 @@
 AGPL-3.0-only license

-Copyright (c) 2023 Luke Parker
+Copyright (c) 2023-2025 Luke Parker

 This program is free software: you can redistribute it and/or modify
 it under the terms of the GNU Affero General Public License Version 3 as

View File

@@ -1,7 +1,29 @@
 # Coordinator

-The Serai coordinator communicates with other coordinators to prepare batches
-for Serai and sign transactions.
-
-In order to achieve consensus over gossip, and order certain events, a
-micro-blockchain is instantiated.
+- [`tendermint`](/tributary/tendermint) is an implementation of the Tendermint
+  BFT algorithm.
+- [`tributary-sdk`](./tributary-sdk) is a micro-blockchain framework. Instead
+  of producing a blockchain daemon like the Polkadot SDK or Cosmos SDK intend
+  to, `tributary` is solely intended to be an embedded asynchronous task within
+  an application.
+
+  The Serai coordinator spawns a tributary for each validator set it's
+  coordinating. This allows the participating validators to communicate in a
+  byzantine-fault-tolerant manner (relying on Tendermint for consensus).
+- [`cosign`](./cosign) contains a library to decide which Substrate blocks
+  should be cosigned and to evaluate cosigns.
+- [`substrate`](./substrate) contains a library to index the Substrate
+  blockchain and handle its events.
+- [`tributary`](./tributary) is our instantiation of the Tributary SDK for the
+  Serai processor. It includes the `Transaction` definition and deferred
+  execution logic.
+- [`p2p`](./p2p) is our abstract P2P API to service the Coordinator.
+- [`libp2p`](./p2p/libp2p) is our libp2p-backed implementation of the P2P API.
+- [`src`](./src) contains the source code for the Coordinator binary itself.

View File

@@ -0,0 +1,33 @@
[package]
name = "serai-cosign"
version = "0.1.0"
description = "Evaluator of cosigns for the Serai network"
license = "AGPL-3.0-only"
repository = "https://github.com/serai-dex/serai/tree/develop/coordinator/cosign"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = []
edition = "2021"
publish = false
rust-version = "1.85"
[package.metadata.docs.rs]
all-features = true
rustdoc-args = ["--cfg", "docsrs"]
[lints]
workspace = true
[dependencies]
blake2 = { version = "0.11.0-rc.0", default-features = false, features = ["alloc"] }
borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] }
serai-client-serai = { path = "../../substrate/client/serai", default-features = false }
log = { version = "0.4", default-features = false, features = ["std"] }
tokio = { version = "1", default-features = false }
serai-db = { path = "../../common/db", version = "0.1.1" }
serai-task = { path = "../../common/task", version = "0.1" }
serai-cosign-types = { path = "./types" }

View File

@@ -1,6 +1,6 @@
 AGPL-3.0-only license

-Copyright (c) 2022-2023 Luke Parker
+Copyright (c) 2023-2025 Luke Parker

 This program is free software: you can redistribute it and/or modify
 it under the terms of the GNU Affero General Public License Version 3 as

View File

@@ -0,0 +1,121 @@
# Serai Cosign
The Serai blockchain is controlled by a set of validators referred to as the
Serai validators. These validators could attempt to double-spend, even if every
node on the network is a full node, via equivocating.
Posit:
- The Serai validators control X SRI
- The Serai validators produce block A swapping X SRI to Y XYZ
- The Serai validators produce block B swapping X SRI to Z ABC
- The Serai validators finalize block A and send to the validators for XYZ
- The Serai validators finalize block B and send to the validators for ABC
This is solved via the cosigning protocol. The validators for XYZ and the
validators for ABC each sign their view of the Serai blockchain, communicating
amongst each other to ensure consistency.
The security of the cosigning protocol is not formally proven, and there are no
claims it achieves Byzantine Fault Tolerance. This protocol is meant to be
practical and make such attacks infeasible, when they could already be argued
difficult to perform.
### Definitions
- Cosign: A signature from a non-Serai validator set for a Serai block
- Cosign Commit: A collection of cosigns which achieve the necessary weight
### Methodology
Finalized blocks from the Serai network are intended to be cosigned if they
contain burn events. Only once cosigned should non-Serai validators process
them.
Cosigning occurs by a non-Serai validator set, using their threshold keys
declared on the Serai blockchain. Once 83% of non-Serai validator sets, by
weight, cosign a block, a cosign commit is formed. A cosign commit for a block
is considered to also cosign for all blocks preceding it.
### Bounds Under Asynchrony
Assuming an asynchronous environment fully controlled by the adversary, 34% of
a validator set may cause an equivocation. Control of 67% of non-Serai
validator sets, by weight, is sufficient to produce two distinct cosign commits
at the same position. This is due to the honest stake, 33%, being split across
the two candidates (67% + 16.5% = 83.5%, just over the threshold). This means
the cosigning protocol may produce multiple cosign commits if 34% of 67%, just
22.78%, of the non-Serai validator sets, is malicious. This would be in
conjunction with 34% of the Serai validator set (assumed 20% of total stake),
for a total stake requirement of 34% of 20% + 22.78% of 80% (25.024%). This is
an increase from the 6.8% required without the cosigning protocol.
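A small sketch (not from the README itself) double-checking this section's arithmetic, with the 20% Serai-stake figure stated above made explicit:

```rust
fn main() {
  // 67% malicious weight plus half the honest 33% crosses the 83% threshold
  assert_eq!(67.0 + (33.0 / 2.0), 83.5);
  // 34% of that 67% suffices for the underlying equivocation
  let non_serai = 0.34_f64 * 0.67;
  assert!((non_serai - 0.2278).abs() < 1e-12); // 22.78% of non-Serai stake
  // Weight Serai as the assumed 20% of total stake, non-Serai as 80%
  let total = (0.34 * 0.20) + (non_serai * 0.80);
  assert!((total - 0.25024).abs() < 1e-12); // 25.024% of total stake
}
```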
### Bounds Under Synchrony
Assuming the honest stake within the non-Serai validator sets detects the
malicious stake within their set prior to assisting in producing a cosign for
their set, for which there is a multi-second window, 67% of 67% of non-Serai
validator sets is required to produce cosigns for those sets. This raises the
total stake requirement to 42.712% (past the usual 34% threshold).
### Behavior Reliant on Synchrony
If the Serai blockchain node detects an equivocation, it will stop responding
to all RPC requests and stop participating in finalizing further blocks. This
lets the node communicate the equivocating commits to other nodes (causing them
to exhibit the same behavior), yet prevents interaction with it.
If cosigns representing 17% of the non-Serai validator sets by weight are
detected for distinct blocks at the same position, the protocol halts. An
explicit latency period of seventy seconds is enacted after receiving a cosign
commit for the detection of such an equivocation. This is largely redundant
given how the Serai blockchain node will presumably have halted itself by this
time.
### Equivocation-Detection Avoidance
Malicious Serai validators could avoid detection of their equivocating if they
produced two distinct blockchains, A and B, with different keys declared for
the same non-Serai validator set. While the validators following A may detect
the cosigns for distinct blocks by validators following B, the cosigns would be
assumed invalid due to their signatures being verified against distinct keys.
This is prevented by requiring cosigns on the blocks which declare new keys,
ensuring all validators have a consistent view of the keys used within the
cosigning protocol (per the bounds of the cosigning protocol). These blocks are
exempt from the general policy of cosign commits cosigning all prior blocks,
preventing the newly declared keys (which aren't yet cosigned) from being used
to cosign themselves. These cosigns are flagged as "notable", are permanently
archived, and must be synced before a validator will move forward.
Cosigning the block which declares new keys also ensures agreement on the
preceding block which declared the new set, with an exact specification of the
participants and their weight, before it impacts the cosigning protocol.
### Denial of Service Concerns
Any historical Serai validator set may trigger a chain halt by producing an
equivocation after their retirement. This requires 67% to be malicious. 34% of the
active Serai validator set may also trigger a chain halt.
17% of non-Serai validator sets equivocating causing a halt means 5.67% of
non-Serai validator sets' stake may cause a halt (in an asynchronous
environment fully controlled by the adversary). In a synchronous environment
where the honest stake cannot be split across two candidates, 11.33% of
non-Serai validator sets' stake is required.
The more practical attack is for one to obtain 5.67% of non-Serai validator
sets' stake, under any network conditions, and simply go offline. This will
take 17% of validator sets offline with it, preventing any cosign commits
from being performed. A fallback protocol where validators individually produce
cosigns, removing the network's horizontal scalability but ensuring liveness,
prevents this, restoring the additional requirements for control of an
asynchronous network or 11.33% of non-Serai validator sets' stake.
### TODO
The Serai node no longer responding to RPC requests upon detecting any
equivocation, and the fallback protocol where validators individually produce
signatures, are not implemented at this time. The former means the detection of
equivocating cosigns is not redundant and the latter makes 5.67% of non-Serai
validator sets' stake the DoS threshold, even without control of an
asynchronous network.

View File

@@ -0,0 +1,57 @@
use core::future::Future;
use std::time::{Duration, SystemTime};
use serai_db::*;
use serai_task::{DoesNotError, ContinuallyRan};
use crate::evaluator::CosignedBlocks;
/// How often callers should broadcast the cosigns flagged for rebroadcasting.
pub const BROADCAST_FREQUENCY: Duration = Duration::from_secs(60);
const SYNCHRONY_EXPECTATION: Duration = Duration::from_secs(10);
const ACKNOWLEDGEMENT_DELAY: Duration =
Duration::from_secs(BROADCAST_FREQUENCY.as_secs() + SYNCHRONY_EXPECTATION.as_secs());
create_db!(
SubstrateCosignDelay {
// The latest cosigned block number.
LatestCosignedBlockNumber: () -> u64,
}
);
/// A task to delay acknowledgement of cosigns.
pub(crate) struct CosignDelayTask<D: Db> {
pub(crate) db: D,
}
impl<D: Db> ContinuallyRan for CosignDelayTask<D> {
type Error = DoesNotError;
fn run_iteration(&mut self) -> impl Send + Future<Output = Result<bool, Self::Error>> {
async move {
let mut made_progress = false;
loop {
let mut txn = self.db.txn();
// Receive the next block to mark as cosigned
let Some((block_number, time_evaluated)) = CosignedBlocks::try_recv(&mut txn) else {
break;
};
// Calculate when we should mark it as valid
let time_valid =
SystemTime::UNIX_EPOCH + Duration::from_secs(time_evaluated) + ACKNOWLEDGEMENT_DELAY;
// Sleep until then
tokio::time::sleep(time_valid.duration_since(SystemTime::now()).unwrap_or(Duration::ZERO))
.await;
// Set the cosigned block
LatestCosignedBlockNumber::set(&mut txn, &block_number);
txn.commit();
made_progress = true;
}
Ok(made_progress)
}
}
}
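An observation, not from the diff: `ACKNOWLEDGEMENT_DELAY` works out to 60 + 10 = 70 seconds, i.e. the "explicit latency period of seventy seconds" described in the cosign README above.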

View File

@@ -0,0 +1,246 @@
use core::future::Future;
use std::time::{Duration, Instant, SystemTime};
use serai_db::*;
use serai_task::ContinuallyRan;
use crate::{
HasEvents, GlobalSession, NetworksLatestCosignedBlock, RequestNotableCosigns,
intend::{GlobalSessionsChannel, BlockEventData, BlockEvents},
};
create_db!(
SubstrateCosignEvaluator {
// The global session currently being evaluated.
CurrentlyEvaluatedGlobalSession: () -> ([u8; 32], GlobalSession),
}
);
db_channel!(
SubstrateCosignEvaluatorChannels {
// (cosigned block, time cosign was evaluated)
CosignedBlocks: () -> (u64, u64),
}
);
// This is a strict function which won't panic, even with a malicious Serai node, so long as:
// - It's called incrementally (with an increment of 1)
// - It's only called for block numbers we've completed indexing on within the intend task
// - It's only called for block numbers after a global session has started
// - The global sessions channel is populated as the block declaring the session is indexed
// Which all hold true within the context of this task and the intend task.
//
// This function will also ensure the currently evaluated global session is incremented once we
// finish evaluation of the prior session.
fn currently_evaluated_global_session_strict(
txn: &mut impl DbTxn,
block_number: u64,
) -> ([u8; 32], GlobalSession) {
let mut res = {
let existing = match CurrentlyEvaluatedGlobalSession::get(txn) {
Some(existing) => existing,
None => {
let first = GlobalSessionsChannel::try_recv(txn)
.expect("fetching latest global session yet none declared");
CurrentlyEvaluatedGlobalSession::set(txn, &first);
first
}
};
assert!(
existing.1.start_block_number <= block_number,
"candidate's start block number exceeds our block number"
);
existing
};
if let Some(next) = GlobalSessionsChannel::peek(txn) {
assert!(
block_number <= next.1.start_block_number,
"currently_evaluated_global_session_strict wasn't called incrementally"
);
// If it's time for this session to activate, take it from the channel and set it
if block_number == next.1.start_block_number {
GlobalSessionsChannel::try_recv(txn).unwrap();
CurrentlyEvaluatedGlobalSession::set(txn, &next);
res = next;
}
}
res
}
pub(crate) fn currently_evaluated_global_session(getter: &impl Get) -> Option<[u8; 32]> {
CurrentlyEvaluatedGlobalSession::get(getter).map(|(id, _info)| id)
}
/// A task to determine if a block has been cosigned and we should handle it.
pub(crate) struct CosignEvaluatorTask<D: Db, R: RequestNotableCosigns> {
pub(crate) db: D,
pub(crate) request: R,
pub(crate) last_request_for_cosigns: Instant,
}
impl<D: Db, R: RequestNotableCosigns> ContinuallyRan for CosignEvaluatorTask<D, R> {
type Error = String;
fn run_iteration(&mut self) -> impl Send + Future<Output = Result<bool, Self::Error>> {
let should_request_cosigns = |last_request_for_cosigns: &mut Instant| {
const REQUEST_COSIGNS_SPACING: Duration = Duration::from_secs(60);
if Instant::now() < (*last_request_for_cosigns + REQUEST_COSIGNS_SPACING) {
return false;
}
*last_request_for_cosigns = Instant::now();
true
};
async move {
let mut known_cosign = None;
let mut made_progress = false;
loop {
let mut txn = self.db.txn();
let Some(BlockEventData { block_number, has_events }) = BlockEvents::try_recv(&mut txn)
else {
break;
};
// Fetch the global session information
let (global_session, global_session_info) =
currently_evaluated_global_session_strict(&mut txn, block_number);
match has_events {
// Because this had notable events, we require an explicit cosign for this block by a
// supermajority of the prior block's validator sets
HasEvents::Notable => {
let mut weight_cosigned = 0;
for set in global_session_info.sets {
// Check if we have the cosign from this set
if NetworksLatestCosignedBlock::get(&txn, global_session, set.network)
.map(|signed_cosign| signed_cosign.cosign.block_number) ==
Some(block_number)
{
// Since we have this cosign, add the set's weight to the weight which has cosigned
weight_cosigned +=
global_session_info.stakes.get(&set.network).ok_or_else(|| {
"ValidatorSet in global session yet didn't have its stake".to_string()
})?;
}
}
// Check if the sum weight doesn't cross the required threshold
if weight_cosigned < (((global_session_info.total_stake * 83) / 100) + 1) {
// Request the necessary cosigns over the network
if should_request_cosigns(&mut self.last_request_for_cosigns) {
self
.request
.request_notable_cosigns(global_session)
.await
.map_err(|e| format!("{e:?}"))?;
}
// We return an error so the delay before this task is run again increases
return Err(format!(
"notable block (#{block_number}) wasn't yet cosigned. this should resolve shortly",
));
}
log::info!("marking notable block #{block_number} as cosigned");
}
// Since this block didn't have any notable events, we simply require a cosign for this
// block or a greater block by the current validator sets
HasEvents::NonNotable => {
// Check if this was satisfied by a cached result which wasn't calculated incrementally
let known_cosigned = if let Some(known_cosign) = known_cosign {
known_cosign >= block_number
} else {
// Clear `known_cosign` which is no longer helpful
known_cosign = None;
false
};
// If it isn't already known to be cosigned, evaluate the latest cosigns
if !known_cosigned {
/*
LatestCosign is populated with the latest cosigns for each network which don't
exceed the latest global session we've evaluated the start of. This current block
is during the latest global session we've evaluated the start of.
*/
let mut weight_cosigned = 0;
let mut lowest_common_block: Option<u64> = None;
for set in global_session_info.sets {
// Check if this set cosigned this block or not
let Some(cosign) =
NetworksLatestCosignedBlock::get(&txn, global_session, set.network)
else {
continue;
};
if cosign.cosign.block_number >= block_number {
weight_cosigned +=
global_session_info.stakes.get(&set.network).ok_or_else(|| {
"ValidatorSet in global session yet didn't have its stake".to_string()
})?;
}
// Update the lowest block common to all of these cosigns
lowest_common_block = lowest_common_block
.map(|existing| existing.min(cosign.cosign.block_number))
.or(Some(cosign.cosign.block_number));
}
// Check if the sum weight doesn't cross the required threshold
if weight_cosigned < (((global_session_info.total_stake * 83) / 100) + 1) {
// Request the superseding notable cosigns over the network
// If this session hasn't yet produced notable cosigns, then we presume we'll see
// the desired non-notable cosigns as part of normal operations, without needing to
// explicitly request them
if should_request_cosigns(&mut self.last_request_for_cosigns) {
self
.request
.request_notable_cosigns(global_session)
.await
.map_err(|e| format!("{e:?}"))?;
}
// We return an error so the delay before this task is run again increases
return Err(format!(
"block (#{block_number}) wasn't yet cosigned. this should resolve shortly",
));
}
// Update the cached result for the block we know is cosigned
/*
There may be a higher block which was cosigned, but once we get to this block,
we'll re-evaluate and find it then. The alternative would be an optimistic
re-evaluation now. Both are fine, so the lower-complexity option is preferred.
*/
known_cosign = lowest_common_block;
}
log::debug!("marking non-notable block #{block_number} as cosigned");
}
// If this block has no events necessitating cosigning, we can immediately consider the
// block cosigned (making this block a NOP)
HasEvents::No => {}
}
// Since we checked we had the necessary cosigns, send it for delay before acknowledgement
CosignedBlocks::send(
&mut txn,
&(
block_number,
SystemTime::now()
.duration_since(SystemTime::UNIX_EPOCH)
.unwrap_or(Duration::ZERO)
.as_secs(),
),
);
txn.commit();
if (block_number % 500) == 0 {
log::info!("marking block #{block_number} as cosigned");
}
made_progress = true;
}
Ok(made_progress)
}
}
}

View File

@@ -0,0 +1,273 @@
use core::future::Future;
use std::{sync::Arc, collections::HashMap};
use blake2::{Digest, Blake2b256};
use serai_client_serai::{
abi::{
primitives::{
network_id::{ExternalNetworkId, NetworkId},
balance::Amount,
crypto::Public,
validator_sets::{Session, ExternalValidatorSet},
address::SeraiAddress,
merkle::IncrementalUnbalancedMerkleTree,
},
validator_sets::Event,
},
Serai, Events,
};
use serai_db::*;
use serai_task::ContinuallyRan;
use crate::*;
#[derive(BorshSerialize, BorshDeserialize)]
struct Set {
session: Session,
key: Public,
stake: Amount,
}
create_db!(
CosignIntend {
ScanCosignFrom: () -> u64,
BuildsUpon: () -> IncrementalUnbalancedMerkleTree,
Stakes: (network: ExternalNetworkId, validator: SeraiAddress) -> Amount,
Validators: (set: ExternalValidatorSet) -> Vec<SeraiAddress>,
LatestSet: (network: ExternalNetworkId) -> Set,
}
);
#[derive(Debug, BorshSerialize, BorshDeserialize)]
pub(crate) struct BlockEventData {
pub(crate) block_number: u64,
pub(crate) has_events: HasEvents,
}
db_channel! {
CosignIntendChannels {
GlobalSessionsChannel: () -> ([u8; 32], GlobalSession),
BlockEvents: () -> BlockEventData,
IntendedCosigns: (set: ExternalValidatorSet) -> CosignIntent,
}
}
async fn block_has_events_justifying_a_cosign(
serai: &Serai,
block_number: u64,
) -> Result<(Block, Events, HasEvents), String> {
let block = serai
.block_by_number(block_number)
.await
.map_err(|e| format!("{e:?}"))?
.ok_or_else(|| "couldn't get block which should've been finalized".to_string())?;
let events = serai.events(block.header.hash()).await.map_err(|e| format!("{e:?}"))?;
if events.validator_sets().set_keys_events().next().is_some() {
return Ok((block, events, HasEvents::Notable));
}
if events.coins().burn_with_instruction_events().next().is_some() {
return Ok((block, events, HasEvents::NonNotable));
}
Ok((block, events, HasEvents::No))
}
// Fetch the `ExternalValidatorSet`s, and their associated keys, used for cosigning as of this
// block.
fn cosigning_sets(getter: &impl Get) -> Vec<(ExternalValidatorSet, Public, Amount)> {
let mut sets = vec![];
for network in ExternalNetworkId::all() {
let Some(Set { session, key, stake }) = LatestSet::get(getter, network) else {
// If this network doesn't have usable keys, move on
continue;
};
sets.push((ExternalValidatorSet { network, session }, key, stake));
}
sets
}
/// A task to determine which blocks we should intend to cosign.
pub(crate) struct CosignIntendTask<D: Db> {
pub(crate) db: D,
pub(crate) serai: Arc<Serai>,
}
impl<D: Db> ContinuallyRan for CosignIntendTask<D> {
type Error = String;
fn run_iteration(&mut self) -> impl Send + Future<Output = Result<bool, Self::Error>> {
async move {
let start_block_number = ScanCosignFrom::get(&self.db).unwrap_or(1);
let latest_block_number =
self.serai.latest_finalized_block_number().await.map_err(|e| format!("{e:?}"))?;
for block_number in start_block_number ..= latest_block_number {
let mut txn = self.db.txn();
let (block, events, mut has_events) =
block_has_events_justifying_a_cosign(&self.serai, block_number)
.await
.map_err(|e| format!("{e:?}"))?;
let mut builds_upon =
BuildsUpon::get(&txn).unwrap_or(IncrementalUnbalancedMerkleTree::new());
// Check we are indexing a linear chain
if block.header.builds_upon() !=
builds_upon.clone().calculate(serai_client_serai::abi::BLOCK_HEADER_BRANCH_TAG)
{
Err(format!(
"node's block #{block_number} doesn't build upon the block #{} prior indexed",
block_number - 1
))?;
}
let block_hash = block.header.hash();
SubstrateBlockHash::set(&mut txn, block_number, &block_hash);
builds_upon.append(
serai_client_serai::abi::BLOCK_HEADER_BRANCH_TAG,
Blake2b256::new_with_prefix([serai_client_serai::abi::BLOCK_HEADER_LEAF_TAG])
.chain_update(block_hash.0)
.finalize()
.into(),
);
BuildsUpon::set(&mut txn, &builds_upon);
// Update the stakes
for event in events.validator_sets().allocation_events() {
let Event::Allocation { validator, network, amount } = event else {
panic!("event from `allocation_events` wasn't `Event::Allocation`")
};
let Ok(network) = ExternalNetworkId::try_from(*network) else { continue };
let existing = Stakes::get(&txn, network, *validator).unwrap_or(Amount(0));
Stakes::set(&mut txn, network, *validator, &Amount(existing.0 + amount.0));
}
for event in events.validator_sets().deallocation_events() {
let Event::Deallocation { validator, network, amount, timeline: _ } = event else {
panic!("event from `deallocation_events` wasn't `Event::Deallocation`")
};
let Ok(network) = ExternalNetworkId::try_from(*network) else { continue };
let existing = Stakes::get(&txn, network, *validator).unwrap_or(Amount(0));
Stakes::set(&mut txn, network, *validator, &Amount(existing.0 - amount.0));
}
// Handle decided sets
for event in events.validator_sets().set_decided_events() {
let Event::SetDecided { set, validators } = event else {
panic!("event from `set_decided_events` wasn't `Event::SetDecided`")
};
let Ok(set) = ExternalValidatorSet::try_from(*set) else { continue };
Validators::set(
&mut txn,
set,
&validators.iter().map(|(validator, _key_shares)| *validator).collect(),
);
}
// Handle declarations of the latest set
for event in events.validator_sets().set_keys_events() {
let Event::SetKeys { set, key_pair } = event else {
panic!("event from `set_keys_events` wasn't `Event::SetKeys`")
};
let mut stake = 0;
for validator in
Validators::take(&mut txn, *set).expect("set which wasn't decided set keys")
{
stake += Stakes::get(&txn, set.network, validator).unwrap_or(Amount(0)).0;
}
LatestSet::set(
&mut txn,
set.network,
&Set { session: set.session, key: key_pair.0, stake: Amount(stake) },
);
}
let global_session_for_this_block = LatestGlobalSessionIntended::get(&txn);
// If this is notable, it creates a new global session, which we index into the database
// now
if has_events == HasEvents::Notable {
let sets_and_keys_and_stakes = cosigning_sets(&txn);
let global_session = GlobalSession::id(
sets_and_keys_and_stakes.iter().map(|(set, _key, _stake)| *set).collect(),
);
let mut sets = Vec::with_capacity(sets_and_keys_and_stakes.len());
let mut keys = HashMap::with_capacity(sets_and_keys_and_stakes.len());
let mut stakes = HashMap::with_capacity(sets_and_keys_and_stakes.len());
let mut total_stake = 0;
for (set, key, stake) in sets_and_keys_and_stakes {
sets.push(set);
keys.insert(set.network, key);
stakes.insert(set.network, stake.0);
total_stake += stake.0;
}
if total_stake == 0 {
Err(format!("cosigning sets for block #{block_number} had 0 stake in total"))?;
}
let global_session_info = GlobalSession {
// This session starts cosigning after this block, as this block must be cosigned by
// the existing validators
start_block_number: block_number + 1,
sets,
keys,
stakes,
total_stake,
};
GlobalSessions::set(&mut txn, global_session, &global_session_info);
if let Some(ending_global_session) = global_session_for_this_block {
GlobalSessionsLastBlock::set(&mut txn, ending_global_session, &block_number);
}
LatestGlobalSessionIntended::set(&mut txn, &global_session);
GlobalSessionsChannel::send(&mut txn, &(global_session, global_session_info));
}
// If there isn't anyone available to cosign this block, meaning it'll never be cosigned,
// we flag it as not having any events requiring cosigning so we don't attempt to
// sign/require a cosign for it
if global_session_for_this_block.is_none() {
has_events = HasEvents::No;
}
match has_events {
HasEvents::Notable | HasEvents::NonNotable => {
let global_session_for_this_block = global_session_for_this_block
.expect("global session for this block was None but still attempting to cosign it");
let global_session_info = GlobalSessions::get(&txn, global_session_for_this_block)
.expect("last global session intended wasn't saved to the database");
// Tell each set of their expectation to cosign this block
for set in global_session_info.sets {
log::debug!("{set:?} will be cosigning block #{block_number}");
IntendedCosigns::send(
&mut txn,
set,
&CosignIntent {
global_session: global_session_for_this_block,
block_number,
block_hash,
notable: has_events == HasEvents::Notable,
},
);
}
}
HasEvents::No => {}
}
// Populate a singular feed with every block's status for the evaluator to work off of
BlockEvents::send(&mut txn, &(BlockEventData { block_number, has_events }));
// Mark this block as handled, meaning we should scan from the next block moving on
ScanCosignFrom::set(&mut txn, &(block_number + 1));
txn.commit();
}
Ok(start_block_number <= latest_block_number)
}
}
}

View File

@@ -0,0 +1,393 @@
#![cfg_attr(docsrs, feature(doc_cfg))]
#![doc = include_str!("../README.md")]
#![deny(missing_docs)]
use core::{fmt::Debug, future::Future};
use std::{sync::Arc, collections::HashMap, time::Instant};
use blake2::{Digest, Blake2s256};
use borsh::{BorshSerialize, BorshDeserialize};
use serai_client_serai::{
abi::{
primitives::{
BlockHash,
crypto::{Public, KeyPair},
network_id::ExternalNetworkId,
validator_sets::{Session, ExternalValidatorSet},
address::SeraiAddress,
},
Block,
},
Serai, State,
};
use serai_db::*;
use serai_task::*;
pub use serai_cosign_types::*;
/// The cosigns which are intended to be performed.
mod intend;
/// The evaluator of the cosigns.
mod evaluator;
/// The task to delay acknowledgement of the cosigns.
mod delay;
pub use delay::BROADCAST_FREQUENCY;
use delay::LatestCosignedBlockNumber;
/// A 'global session', defined as all validator sets used for cosigning at a given moment.
///
/// We evaluate cosign faults within a global session. This ensures even if cosigners cosign
/// distinct blocks at distinct positions within a global session, we still identify the faults.
/*
There is the attack where a validator set is given an alternate blockchain with a key generation
event at block #n, while most validator sets are given a blockchain with a key generation event
at block number #(n+1). This prevents whoever has the alternate blockchain from verifying the
cosigns on the primary blockchain, and detecting the faults, if they use the keys as of the block
prior to the block being cosigned.
We solve this by binding cosigns to a global session ID, which has a specific start block, and
reading the keys from the start block. This means that so long as all validator sets agree on the
start of a global session, they can verify all cosigns produced by that session, regardless of
how it advances. Since agreeing on the start of a global session is mandated, there's no way to
have validator sets follow two distinct global sessions without breaking the bounds of the
cosigning protocol.
*/
#[derive(Debug, BorshSerialize, BorshDeserialize)]
pub(crate) struct GlobalSession {
pub(crate) start_block_number: u64,
pub(crate) sets: Vec<ExternalValidatorSet>,
pub(crate) keys: HashMap<ExternalNetworkId, Public>,
pub(crate) stakes: HashMap<ExternalNetworkId, u64>,
pub(crate) total_stake: u64,
}
impl GlobalSession {
fn id(mut cosigners: Vec<ExternalValidatorSet>) -> [u8; 32] {
cosigners.sort_by_key(|a| borsh::to_vec(a).unwrap());
Blake2s256::digest(borsh::to_vec(&cosigners).unwrap()).into()
}
}
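// A hedged test sketch: the ID must not depend on the order cosigners are passed in, as
// `GlobalSession::id` sorts them before hashing. This assumes `ExternalValidatorSet` is `Copy`
// with public `network`/`session` fields, and that these `ExternalNetworkId` variants exist.
#[cfg(test)]
mod global_session_id_tests {
  use super::*;

  #[test]
  fn id_is_order_insensitive() {
    let a = ExternalValidatorSet { network: ExternalNetworkId::Bitcoin, session: Session(0) };
    let b = ExternalValidatorSet { network: ExternalNetworkId::Monero, session: Session(1) };
    assert_eq!(GlobalSession::id(vec![a, b]), GlobalSession::id(vec![b, a]));
  }
}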
/// If the block has events.
#[derive(Clone, Copy, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)]
enum HasEvents {
/// The block had a notable event.
///
/// This is a special case as blocks with key gen events change the keys used for cosigning, and
/// accordingly must be cosigned before we advance past them.
Notable,
/// The block had a non-notable event justifying a cosign.
NonNotable,
/// The block didn't have an event justifying a cosign.
No,
}
create_db! {
Cosign {
// The following are populated by the intend task and used throughout the library
// An index of Substrate blocks
SubstrateBlockHash: (block_number: u64) -> BlockHash,
// A mapping from a global session's ID to its relevant information.
GlobalSessions: (global_session: [u8; 32]) -> GlobalSession,
// The last block to be cosigned by a global session.
GlobalSessionsLastBlock: (global_session: [u8; 32]) -> u64,
// The latest global session intended.
//
// This is distinct from the latest global session for which we've evaluated the cosigns for.
LatestGlobalSessionIntended: () -> [u8; 32],
// The following are managed by the `intake_cosign` function present in this file
// The latest cosigned block for each network.
//
// This will only be populated with cosigns predating or during the most recent global session
// to have its start cosigned.
//
// The global session changes upon a notable block, causing each global session to have exactly
// one notable block. All validator sets will explicitly produce a cosign for their notable
// block, so a network's latest cosigned block within a global session is either its notable
// cosign or simply its most recent cosign.
NetworksLatestCosignedBlock: (
global_session: [u8; 32],
network: ExternalNetworkId
) -> SignedCosign,
// Cosigns received for blocks not locally recognized as finalized.
Faults: (global_session: [u8; 32]) -> Vec<SignedCosign>,
// The global session which faulted.
FaultedSession: () -> [u8; 32],
}
}
/// An object usable to request notable cosigns for a block.
pub trait RequestNotableCosigns: 'static + Send {
/// The error type which may be encountered when requesting notable cosigns.
type Error: Debug;
/// Request the notable cosigns for this global session.
fn request_notable_cosigns(
&self,
global_session: [u8; 32],
) -> impl Send + Future<Output = Result<(), Self::Error>>;
}
/// An error used to indicate the cosigning protocol has faulted.
#[derive(Debug)]
pub struct Faulted;
/// An error incurred while intaking a cosign.
#[derive(Debug)]
pub enum IntakeCosignError {
/// Cosign is for a not-yet-indexed block
NotYetIndexedBlock,
/// A later cosign for this cosigner has already been handled
StaleCosign,
/// The cosign's global session isn't recognized
UnrecognizedGlobalSession,
/// The cosign is for a block before its global session starts
BeforeGlobalSessionStart,
/// The cosign is for a block after its global session ends
AfterGlobalSessionEnd,
/// The cosign's signing network wasn't a participant in this global session
NonParticipatingNetwork,
/// The cosign had an invalid signature
InvalidSignature,
/// The cosign is for a global session which has yet to have its declaration block cosigned
FutureGlobalSession,
}
impl IntakeCosignError {
/// If this error is temporal to the local view
pub fn temporal(&self) -> bool {
match self {
IntakeCosignError::NotYetIndexedBlock |
IntakeCosignError::StaleCosign |
IntakeCosignError::UnrecognizedGlobalSession |
IntakeCosignError::FutureGlobalSession => true,
IntakeCosignError::BeforeGlobalSessionStart |
IntakeCosignError::AfterGlobalSessionEnd |
IntakeCosignError::NonParticipatingNetwork |
IntakeCosignError::InvalidSignature => false,
}
}
}
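// A hedged usage sketch: callers can branch on `temporal` to decide whether to retry a cosign
// once the local view catches up, or drop it outright (`queue_for_retry` and `drop_cosign` are
// hypothetical helpers, not part of this crate):
//
// match cosigning.intake_cosign(&signed_cosign) {
//   Ok(()) => {}
//   Err(e) if e.temporal() => queue_for_retry(signed_cosign),
//   Err(_) => drop_cosign(signed_cosign),
// }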
/// The interface to manage cosigning with.
pub struct Cosigning<D: Db> {
db: D,
}
impl<D: Db> Cosigning<D> {
/// Spawn the tasks to intend and evaluate cosigns.
///
/// The database specified must only be used with a singular instance of the Serai network, and
/// only used once at any given time.
pub fn spawn<R: RequestNotableCosigns>(
db: D,
serai: Arc<Serai>,
request: R,
tasks_to_run_upon_cosigning: Vec<TaskHandle>,
) -> Self {
let (intend_task, _intend_task_handle) = Task::new();
let (evaluator_task, evaluator_task_handle) = Task::new();
let (delay_task, delay_task_handle) = Task::new();
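// These tasks form a pipeline: the intend task wakes the evaluator, the evaluator wakes the
// delay task, and the delay task wakes the caller-provided tasks once blocks are cosigned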
tokio::spawn(
(intend::CosignIntendTask { db: db.clone(), serai })
.continually_run(intend_task, vec![evaluator_task_handle]),
);
tokio::spawn(
(evaluator::CosignEvaluatorTask {
db: db.clone(),
request,
last_request_for_cosigns: Instant::now(),
})
.continually_run(evaluator_task, vec![delay_task_handle]),
);
tokio::spawn(
(delay::CosignDelayTask { db: db.clone() })
.continually_run(delay_task, tasks_to_run_upon_cosigning),
);
Self { db }
}
/// The latest cosigned block number.
pub fn latest_cosigned_block_number(getter: &impl Get) -> Result<u64, Faulted> {
if FaultedSession::get(getter).is_some() {
Err(Faulted)?;
}
Ok(LatestCosignedBlockNumber::get(getter).unwrap_or(0))
}
/// Fetch a cosigned Substrate block's hash by its block number.
pub fn cosigned_block(
getter: &impl Get,
block_number: u64,
) -> Result<Option<BlockHash>, Faulted> {
if block_number > Self::latest_cosigned_block_number(getter)? {
return Ok(None);
}
Ok(Some(
SubstrateBlockHash::get(getter, block_number).expect("cosigned block but didn't index it"),
))
}
/// Fetch the notable cosigns for a global session in order to respond to requests.
///
/// If this global session hasn't produced any notable cosigns, this will return the latest
/// cosigns for this session.
pub fn notable_cosigns(getter: &impl Get, global_session: [u8; 32]) -> Vec<SignedCosign> {
let mut cosigns = vec![];
for network in ExternalNetworkId::all() {
if let Some(cosign) = NetworksLatestCosignedBlock::get(getter, global_session, network) {
cosigns.push(cosign);
}
}
cosigns
}
/// The cosigns to rebroadcast every `BROADCAST_FREQUENCY` seconds.
///
/// This will be the most recent cosigns, in case the initial broadcast failed, or the faulty
/// cosigns, in case of a fault, to induce identification of the fault by others.
pub fn cosigns_to_rebroadcast(&self) -> Vec<SignedCosign> {
if let Some(faulted) = FaultedSession::get(&self.db) {
let mut cosigns = Faults::get(&self.db, faulted).expect("faulted with no faults");
// Also include all of our recognized-as-honest cosigns in an attempt to induce fault
// identification in those who see the faulty cosigns as honest
for network in ExternalNetworkId::all() {
if let Some(cosign) = NetworksLatestCosignedBlock::get(&self.db, faulted, network) {
if cosign.cosign.global_session == faulted {
cosigns.push(cosign);
}
}
}
cosigns
} else {
let Some(global_session) = evaluator::currently_evaluated_global_session(&self.db) else {
return vec![];
};
let mut cosigns = vec![];
for network in ExternalNetworkId::all() {
if let Some(cosign) = NetworksLatestCosignedBlock::get(&self.db, global_session, network) {
cosigns.push(cosign);
}
}
cosigns
}
}
/// Intake a cosign.
//
// Takes `&mut self` as this should only be called once at any given moment.
pub fn intake_cosign(&mut self, signed_cosign: &SignedCosign) -> Result<(), IntakeCosignError> {
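// Validation proceeds as: the block must be locally indexed, the cosign must not be stale
// within its global session, the session must be recognized, the block must be within the
// session's range, and the signature must verify, before the cosign is accepted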
let cosign = &signed_cosign.cosign;
let network = cosign.cosigner;
// Check our indexed blockchain includes a block with this block number
let Some(our_block_hash) = SubstrateBlockHash::get(&self.db, cosign.block_number) else {
Err(IntakeCosignError::NotYetIndexedBlock)?
};
let faulty = cosign.block_hash != our_block_hash;
// Check this isn't a dated cosign within its global session (as it would be if rebroadcasted)
if !faulty {
if let Some(existing) =
NetworksLatestCosignedBlock::get(&self.db, cosign.global_session, network)
{
if existing.cosign.block_number >= cosign.block_number {
Err(IntakeCosignError::StaleCosign)?;
}
}
}
let Some(global_session) = GlobalSessions::get(&self.db, cosign.global_session) else {
Err(IntakeCosignError::UnrecognizedGlobalSession)?
};
// Check the cosigned block number is in range to the global session
if cosign.block_number < global_session.start_block_number {
// Cosign is for a block predating the global session
Err(IntakeCosignError::BeforeGlobalSessionStart)?;
}
if !faulty {
// This prevents a malicious validator set, on the same chain, from producing a cosign after
// their final block, replacing their notable cosign
if let Some(last_block) = GlobalSessionsLastBlock::get(&self.db, cosign.global_session) {
if cosign.block_number > last_block {
// Cosign is for a block after the last block this global session should have signed
Err(IntakeCosignError::AfterGlobalSessionEnd)?;
}
}
}
// Check the cosign's signature
{
let key =
*global_session.keys.get(&network).ok_or(IntakeCosignError::NonParticipatingNetwork)?;
if !signed_cosign.verify_signature(key) {
Err(IntakeCosignError::InvalidSignature)?;
}
}
// Since we verified this cosign's signature, and have a chain sufficiently long, handle the
// cosign
let mut txn = self.db.txn();
if !faulty {
// If this is for a future global session, we don't acknowledge this cosign at this time
let latest_cosigned_block_number = LatestCosignedBlockNumber::get(&txn).unwrap_or(0);
// This global session starts the block *after* its declaration, so we want to check if the
// block declaring it was cosigned
if (global_session.start_block_number - 1) > latest_cosigned_block_number {
drop(txn);
return Err(IntakeCosignError::FutureGlobalSession);
}
// This is safe as it's in-range and newer, as previously checked since it isn't faulty
NetworksLatestCosignedBlock::set(&mut txn, cosign.global_session, network, signed_cosign);
} else {
let mut faults = Faults::get(&txn, cosign.global_session).unwrap_or(vec![]);
// Only handle this as a fault if this set wasn't prior faulty
if !faults.iter().any(|cosign| cosign.cosign.cosigner == network) {
faults.push(signed_cosign.clone());
Faults::set(&mut txn, cosign.global_session, &faults);
let mut weight_cosigned = 0;
for fault in &faults {
let stake = global_session
.stakes
.get(&fault.cosign.cosigner)
.expect("cosigner with recognized key didn't have a stake entry saved");
weight_cosigned += stake;
}
// Check if the sum weight means a fault has occurred
if weight_cosigned >= ((global_session.total_stake * 17) / 100) {
FaultedSession::set(&mut txn, &cosign.global_session);
}
}
}
txn.commit();
Ok(())
}
/// Receive intended cosigns to produce for this ExternalValidatorSet.
///
/// All cosigns intended, up to and including the next notable cosign, are returned.
///
/// This will drain the internal channel and not re-yield these intentions again.
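///
/// For example, if the channel holds intents for blocks `[a, b, c]` where `b` is notable, one
/// call returns `[a, b]` and a later call begins from `c`.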
pub fn intended_cosigns(txn: &mut impl DbTxn, set: ExternalValidatorSet) -> Vec<CosignIntent> {
let mut res: Vec<CosignIntent> = vec![];
// While we have yet to find a notable cosign...
while !res.last().map(|cosign| cosign.notable).unwrap_or(false) {
let Some(intent) = intend::IntendedCosigns::try_recv(txn, set) else { break };
res.push(intent);
}
res
}
}


@@ -0,0 +1,25 @@
[package]
name = "serai-cosign-types"
version = "0.1.0"
description = "Evaluator of cosigns for the Serai network"
license = "AGPL-3.0-only"
repository = "https://github.com/serai-dex/serai/tree/develop/coordinator/cosign"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = []
edition = "2021"
publish = false
rust-version = "1.85"
[package.metadata.docs.rs]
all-features = true
rustdoc-args = ["--cfg", "docsrs"]
[lints]
workspace = true
[dependencies]
schnorrkel = { version = "0.11", default-features = false, features = ["std"] }
borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] }
serai-primitives = { path = "../../../substrate/primitives", default-features = false, features = ["std"] }


@@ -1,6 +1,6 @@
 AGPL-3.0-only license
-Copyright (c) 2023 Luke Parker
+Copyright (c) 2023-2025 Luke Parker
 This program is free software: you can redistribute it and/or modify
 it under the terms of the GNU Affero General Public License Version 3 as


@@ -0,0 +1,72 @@
#![cfg_attr(docsrs, feature(doc_cfg))]
#![deny(missing_docs)]
//! Types used when cosigning Serai. For more info, please see `serai-cosign`.
use borsh::{BorshSerialize, BorshDeserialize};
use serai_primitives::{BlockHash, crypto::Public, network_id::ExternalNetworkId};
/// The schnorrkel context to use when signing a cosign.
pub const COSIGN_CONTEXT: &[u8] = b"/serai/coordinator/cosign";
/// An intended cosign.
#[derive(Clone, Copy, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)]
pub struct CosignIntent {
/// The global session this cosign is being performed under.
pub global_session: [u8; 32],
/// The number of the block to cosign.
pub block_number: u64,
/// The hash of the block to cosign.
pub block_hash: BlockHash,
/// If this cosign must be handled before further cosigns are.
pub notable: bool,
}
/// A cosign.
#[derive(Clone, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)]
pub struct Cosign {
/// The global session this cosign is being performed under.
pub global_session: [u8; 32],
/// The number of the block to cosign.
pub block_number: u64,
/// The hash of the block to cosign.
pub block_hash: BlockHash,
/// The actual cosigner.
pub cosigner: ExternalNetworkId,
}
impl CosignIntent {
/// Convert this into a `Cosign`.
pub fn into_cosign(self, cosigner: ExternalNetworkId) -> Cosign {
let CosignIntent { global_session, block_number, block_hash, notable: _ } = self;
Cosign { global_session, block_number, block_hash, cosigner }
}
}
impl Cosign {
/// The message to sign to sign this cosign.
///
/// This must be signed with schnorrkel, the context set to `COSIGN_CONTEXT`.
pub fn signature_message(&self) -> Vec<u8> {
// We use a schnorrkel context to domain-separate this
borsh::to_vec(self).unwrap()
}
}
/// A signed cosign.
#[derive(Clone, Debug, BorshSerialize, BorshDeserialize)]
pub struct SignedCosign {
/// The cosign.
pub cosign: Cosign,
/// The signature for the cosign.
pub signature: [u8; 64],
}
impl SignedCosign {
/// Verify a cosign's signature.
pub fn verify_signature(&self, signer: Public) -> bool {
let Ok(signer) = schnorrkel::PublicKey::from_bytes(&signer.0) else { return false };
let Ok(signature) = schnorrkel::Signature::from_bytes(&self.signature) else { return false };
signer.verify_simple(COSIGN_CONTEXT, &self.cosign.signature_message(), &signature).is_ok()
}
}
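// A hedged test sketch of the flow described above. It assumes tuple-struct constructors for
// `BlockHash` and `Public`, a `Bitcoin` variant on `ExternalNetworkId`, and derives the keypair
// deterministically to avoid an RNG dependency.
#[cfg(test)]
mod tests {
  use super::*;

  #[test]
  fn sign_and_verify_cosign() {
    let keypair = schnorrkel::MiniSecretKey::from_bytes(&[0; 32])
      .unwrap()
      .expand_to_keypair(schnorrkel::ExpansionMode::Ed25519);
    let cosign = Cosign {
      global_session: [0; 32],
      block_number: 1,
      block_hash: BlockHash([0; 32]),
      cosigner: ExternalNetworkId::Bitcoin,
    };
    // Sign the borsh-serialized cosign under the `COSIGN_CONTEXT` domain separator
    let signature = keypair.sign_simple(COSIGN_CONTEXT, &cosign.signature_message()).to_bytes();
    let signed = SignedCosign { cosign, signature };
    assert!(signed.verify_signature(Public(keypair.public.to_bytes())));
  }
}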


@@ -0,0 +1,33 @@
[package]
name = "serai-coordinator-p2p"
version = "0.1.0"
description = "Serai coordinator's P2P abstraction"
license = "AGPL-3.0-only"
repository = "https://github.com/serai-dex/serai/tree/develop/coordinator/p2p"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = []
edition = "2021"
publish = false
rust-version = "1.85"
[package.metadata.docs.rs]
all-features = true
rustdoc-args = ["--cfg", "docsrs"]
[lints]
workspace = true
[dependencies]
borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] }
serai-db = { path = "../../common/db", version = "0.1" }
serai-primitives = { path = "../../substrate/primitives", default-features = false, features = ["std"] }
serai-cosign = { path = "../cosign" }
tributary-sdk = { path = "../tributary-sdk" }
futures-lite = { version = "2", default-features = false, features = ["std"] }
tokio = { version = "1", default-features = false, features = ["sync", "macros"] }
log = { version = "0.4", default-features = false, features = ["std"] }
serai-task = { path = "../../common/task", version = "0.1" }


@@ -1,6 +1,6 @@
 AGPL-3.0-only license
-Copyright (c) 2022-2023 Luke Parker
+Copyright (c) 2023-2025 Luke Parker
 This program is free software: you can redistribute it and/or modify
 it under the terms of the GNU Affero General Public License Version 3 as


@@ -0,0 +1,3 @@
# Serai Coordinator P2P
The P2P abstraction used by Serai's coordinator, and tasks over it.


@@ -0,0 +1,42 @@
[package]
name = "serai-coordinator-libp2p-p2p"
version = "0.1.0"
description = "Serai coordinator's libp2p-based P2P backend"
license = "AGPL-3.0-only"
repository = "https://github.com/serai-dex/serai/tree/develop/coordinator/p2p/libp2p"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = []
edition = "2021"
publish = false
rust-version = "1.87"
[package.metadata.docs.rs]
all-features = true
rustdoc-args = ["--cfg", "docsrs"]
[lints]
workspace = true
[dependencies]
async-trait = { version = "0.1", default-features = false }
rand_core = { version = "0.6", default-features = false, features = ["std"] }
zeroize = { version = "^1.5", default-features = false, features = ["std"] }
blake2 = { version = "0.11.0-rc.0", default-features = false, features = ["alloc"] }
schnorrkel = { version = "0.11", default-features = false, features = ["std"] }
hex = { version = "0.4", default-features = false, features = ["std"] }
borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] }
serai-client-serai = { path = "../../../substrate/client/serai", default-features = false }
serai-cosign = { path = "../../cosign" }
tributary-sdk = { path = "../../tributary-sdk" }
futures-util = { version = "0.3", default-features = false, features = ["std"] }
tokio = { version = "1", default-features = false, features = ["sync"] }
libp2p = { version = "0.56", default-features = false, features = ["tokio", "tcp", "noise", "yamux", "ping", "request-response", "gossipsub", "macros"] }
log = { version = "0.4", default-features = false, features = ["std"] }
serai-task = { path = "../../../common/task", version = "0.1" }
serai-coordinator-p2p = { path = "../" }


@@ -0,0 +1,15 @@
AGPL-3.0-only license
Copyright (c) 2023-2025 Luke Parker
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License Version 3 as
published by the Free Software Foundation.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.
You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.


@@ -0,0 +1,14 @@
# Serai Coordinator libp2p P2P
A libp2p-backed P2P instantiation for Serai's coordinator.
The libp2p swarm is limited to validators from the Serai network. The swarm
does not maintain any of its own peer finding/routing infrastructure, instead
relying on the Serai network's connection information to dial peers. This does
limit the dialable peers to those immediately reachable via the same IP
address as their Serai node (despite the two being distinct services) and not
hidden behind a NAT, yet it is also quite simple and gives us full control
over who we connect to.
Peers are decided via the internal `DialTask` which aims to maintain a target
amount of peers for each external network. This ensures cosigns are able to
propagate across the external networks which sign them.
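A minimal sketch of that per-network dial check, mirroring `DialTask`'s
internals (the free function itself is purely illustrative):

```rust
/// Whether to dial more peers for a network: below the target count, and below the number of
/// validators actually present minus one, so small sets don't trigger endless dialing.
fn should_dial_more(peer_count: usize, validators_in_network: usize) -> bool {
  const TARGET_PEERS_PER_NETWORK: usize = 5;
  (peer_count < TARGET_PEERS_PER_NETWORK) &&
    (peer_count < validators_in_network.saturating_sub(1))
}
```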


@@ -0,0 +1,187 @@
use core::{pin::Pin, future::Future};
use std::io;
use zeroize::Zeroizing;
use rand_core::{RngCore, OsRng};
use blake2::{Digest, Blake2s256};
use schnorrkel::{Keypair, PublicKey, Signature};
use serai_client_serai::abi::primitives::crypto::Public;
use futures_util::{AsyncRead, AsyncReadExt, AsyncWrite, AsyncWriteExt};
use libp2p::{
core::upgrade::{UpgradeInfo, InboundConnectionUpgrade, OutboundConnectionUpgrade},
identity::{self, PeerId},
noise,
};
use crate::peer_id_from_public;
const PROTOCOL: &str = "/serai/coordinator/validators";
#[derive(Clone)]
pub(crate) struct OnlyValidators {
pub(crate) serai_key: Zeroizing<Keypair>,
pub(crate) noise_keypair: identity::Keypair,
}
impl OnlyValidators {
/// The ephemeral challenge protocol for authentication.
///
/// We use ephemeral challenges to prevent replaying signatures from historic sessions.
///
/// We don't immediately send the challenge. We only send a commitment to it. This prevents our
/// remote peer from choosing their challenge in response to our challenge, in case there was any
/// benefit to doing so.
async fn challenges<S: 'static + Send + Unpin + AsyncRead + AsyncWrite>(
socket: &mut noise::Output<S>,
) -> io::Result<([u8; 32], [u8; 32])> {
let mut our_challenge = [0; 32];
OsRng.fill_bytes(&mut our_challenge);
// Write the hash of our challenge
socket.write_all(&Blake2s256::digest(our_challenge)).await?;
// Read the hash of their challenge
let mut their_challenge_commitment = [0; 32];
socket.read_exact(&mut their_challenge_commitment).await?;
// Reveal our challenge
socket.write_all(&our_challenge).await?;
// Read their challenge
let mut their_challenge = [0; 32];
socket.read_exact(&mut their_challenge).await?;
// Verify their challenge
if <[u8; 32]>::from(Blake2s256::digest(their_challenge)) != their_challenge_commitment {
Err(io::Error::other("challenge didn't match challenge commitment"))?;
}
Ok((our_challenge, their_challenge))
}
// We sign the two noise peer IDs and the ephemeral challenges.
//
// Signing the noise peer IDs ensures we're authenticating this noise connection. The only
// expectations placed on noise are for it to prevent a MITM from impersonating the other end or
// modifying any messages sent.
//
// Signing the ephemeral challenges prevents any replays. While that should be unnecessary, as
// noise MAY prevent replays across sessions (even when the same key is used), and noise IDs
// shouldn't be reused (so it should be fine to reuse an existing signature for these noise IDs),
// it doesn't hurt.
async fn authenticate<S: 'static + Send + Unpin + AsyncRead + AsyncWrite>(
&self,
socket: &mut noise::Output<S>,
dialer_peer_id: PeerId,
dialer_challenge: [u8; 32],
listener_peer_id: PeerId,
listener_challenge: [u8; 32],
) -> io::Result<PeerId> {
// Write our public key
socket.write_all(&self.serai_key.public.to_bytes()).await?;
let msg = borsh::to_vec(&(
dialer_peer_id.to_bytes(),
dialer_challenge,
listener_peer_id.to_bytes(),
listener_challenge,
))
.unwrap();
let signature = self.serai_key.sign_simple(PROTOCOL.as_bytes(), &msg);
socket.write_all(&signature.to_bytes()).await?;
let mut public_key_and_sig = [0; 96];
socket.read_exact(&mut public_key_and_sig).await?;
let public_key = PublicKey::from_bytes(&public_key_and_sig[.. 32])
.map_err(|_| io::Error::other("invalid public key"))?;
let sig = Signature::from_bytes(&public_key_and_sig[32 ..])
.map_err(|_| io::Error::other("invalid signature serialization"))?;
public_key
.verify_simple(PROTOCOL.as_bytes(), &msg, &sig)
.map_err(|_| io::Error::other("invalid signature"))?;
Ok(peer_id_from_public(Public(public_key.to_bytes())))
}
}
impl UpgradeInfo for OnlyValidators {
type Info = <noise::Config as UpgradeInfo>::Info;
type InfoIter = <noise::Config as UpgradeInfo>::InfoIter;
fn protocol_info(&self) -> Self::InfoIter {
// A keypair only causes an error if its sign operation fails, which is only possible with RSA,
// which isn't used within this codebase
noise::Config::new(&self.noise_keypair).unwrap().protocol_info()
}
}
impl<S: 'static + Send + Unpin + AsyncRead + AsyncWrite> InboundConnectionUpgrade<S>
for OnlyValidators
{
type Output = (PeerId, noise::Output<S>);
type Error = io::Error;
type Future = Pin<Box<dyn Send + Future<Output = Result<Self::Output, Self::Error>>>>;
fn upgrade_inbound(
self,
socket: S,
info: <Self as UpgradeInfo>::Info,
) -> <Self as InboundConnectionUpgrade<S>>::Future {
Box::pin(async move {
let (dialer_noise_peer_id, mut socket) = noise::Config::new(&self.noise_keypair)
.unwrap()
.upgrade_inbound(socket, info)
.await
.map_err(io::Error::other)?;
let (our_challenge, dialer_challenge) = OnlyValidators::challenges(&mut socket).await?;
let dialer_serai_validator = self
.authenticate(
&mut socket,
dialer_noise_peer_id,
dialer_challenge,
PeerId::from_public_key(&self.noise_keypair.public()),
our_challenge,
)
.await?;
Ok((dialer_serai_validator, socket))
})
}
}
impl<S: 'static + Send + Unpin + AsyncRead + AsyncWrite> OutboundConnectionUpgrade<S>
for OnlyValidators
{
type Output = (PeerId, noise::Output<S>);
type Error = io::Error;
type Future = Pin<Box<dyn Send + Future<Output = Result<Self::Output, Self::Error>>>>;
fn upgrade_outbound(
self,
socket: S,
info: <Self as UpgradeInfo>::Info,
) -> <Self as OutboundConnectionUpgrade<S>>::Future {
Box::pin(async move {
let (listener_noise_peer_id, mut socket) = noise::Config::new(&self.noise_keypair)
.unwrap()
.upgrade_outbound(socket, info)
.await
.map_err(io::Error::other)?;
let (our_challenge, listener_challenge) = OnlyValidators::challenges(&mut socket).await?;
let listener_serai_validator = self
.authenticate(
&mut socket,
PeerId::from_public_key(&self.noise_keypair.public()),
our_challenge,
listener_noise_peer_id,
listener_challenge,
)
.await?;
Ok((listener_serai_validator, socket))
})
}
}


@@ -0,0 +1,134 @@
use core::{future::Future, str::FromStr};
use std::{sync::Arc, collections::HashSet};
use rand_core::{RngCore, OsRng};
use tokio::sync::mpsc;
use serai_client_serai::{RpcError, Serai};
use libp2p::{
core::multiaddr::{Protocol, Multiaddr},
swarm::dial_opts::DialOpts,
};
use serai_task::ContinuallyRan;
use crate::{PORT, Peers, validators::Validators};
const TARGET_PEERS_PER_NETWORK: usize = 5;
/*
If we only tracked the target amount of peers per network, we'd risk being eclipsed by an
adversary who immediately connects to us with their array of validators upon our boot. Their
array would satisfy our target amount of peers, so we'd never seek more, enabling the adversary
to be the only entity we peered with.
We solve this by additionally requiring an explicit amount of peers which we ourselves dialed,
meaning we randomly chose to connect to them.
*/
// TODO const TARGET_DIALED_PEERS_PER_NETWORK: usize = 3;
pub(crate) struct DialTask {
serai: Arc<Serai>,
validators: Validators,
peers: Peers,
to_dial: mpsc::UnboundedSender<DialOpts>,
}
impl DialTask {
pub(crate) fn new(
serai: Arc<Serai>,
peers: Peers,
to_dial: mpsc::UnboundedSender<DialOpts>,
) -> Self {
DialTask { serai: serai.clone(), validators: Validators::new(serai).0, peers, to_dial }
}
}
impl ContinuallyRan for DialTask {
// Only run every five minutes, not the default of every five seconds
const DELAY_BETWEEN_ITERATIONS: u64 = 5 * 60;
const MAX_DELAY_BETWEEN_ITERATIONS: u64 = 10 * 60;
type Error = RpcError;
fn run_iteration(&mut self) -> impl Send + Future<Output = Result<bool, Self::Error>> {
async move {
self.validators.update().await?;
// If any of our peers is lacking, try to connect to more
let mut dialed = false;
let peer_counts = self
.peers
.peers
.read()
.await
.iter()
.map(|(network, peers)| (*network, peers.len()))
.collect::<Vec<_>>();
for (network, peer_count) in peer_counts {
/*
If we don't have the target amount of peers, and we don't have all the validators in the
set but one, attempt to connect to more validators within this set.
The latter clause is so if there's a set with only 3 validators, we don't infinitely try
to connect to the target amount of peers for this network as we never will. Instead, we
only try to connect to most of the validators actually present.
*/
if (peer_count < TARGET_PEERS_PER_NETWORK) &&
(peer_count <
self
.validators
.by_network()
.get(&network)
.map(HashSet::len)
.unwrap_or(0)
.saturating_sub(1))
{
let mut potential_peers = self.serai.p2p_validators(network).await?;
for _ in 0 .. (TARGET_PEERS_PER_NETWORK - peer_count) {
if potential_peers.is_empty() {
break;
}
let index_to_dial =
usize::try_from(OsRng.next_u64() % u64::try_from(potential_peers.len()).unwrap())
.unwrap();
let randomly_selected_peer = potential_peers.swap_remove(index_to_dial);
let Ok(randomly_selected_peer) = libp2p::Multiaddr::from_str(&randomly_selected_peer)
else {
log::error!(
"peer from substrate wasn't a valid `Multiaddr`: {randomly_selected_peer}"
);
continue;
};
log::info!("found peer from substrate: {randomly_selected_peer}");
// Map the peer from a Substrate P2P network peer to a Coordinator P2P network peer
let mapped_peer = randomly_selected_peer
.into_iter()
.filter_map(|protocol| match protocol {
// Drop PeerIds from the Substrate P2p network
Protocol::P2p(_) => None,
// Use our own TCP port
Protocol::Tcp(_) => Some(Protocol::Tcp(PORT)),
// Pass-through any other specifications (IPv4, IPv6, etc)
other => Some(other),
})
.collect::<Multiaddr>();
log::debug!("mapped found peer: {mapped_peer}");
self
.to_dial
.send(DialOpts::unknown_peer_id().address(mapped_peer).build())
.expect("dial receiver closed?");
dialed = true;
}
}
}
Ok(dialed)
}
}
}


@@ -0,0 +1,75 @@
use core::time::Duration;
use blake2::{Digest, Blake2s256};
use borsh::{BorshSerialize, BorshDeserialize};
use libp2p::gossipsub::{
IdentTopic, MessageId, MessageAuthenticity, ValidationMode, ConfigBuilder, IdentityTransform,
AllowAllSubscriptionFilter, Behaviour,
};
pub use libp2p::gossipsub::Event;
use serai_cosign::SignedCosign;
// Block size limit + 16 KB of space for signatures/metadata
pub(crate) const MAX_LIBP2P_GOSSIP_MESSAGE_SIZE: usize = tributary_sdk::BLOCK_SIZE_LIMIT + 16384;
const LIBP2P_PROTOCOL: &str = "/serai/coordinator/gossip/1.0.0";
const BASE_TOPIC: &str = "/";
fn topic_for_tributary(tributary: [u8; 32]) -> IdentTopic {
IdentTopic::new(format!("/tributary/{}", hex::encode(tributary)))
}
#[derive(Clone, BorshSerialize, BorshDeserialize)]
pub(crate) enum Message {
Tributary { tributary: [u8; 32], message: Vec<u8> },
Cosign(SignedCosign),
}
impl Message {
pub(crate) fn topic(&self) -> IdentTopic {
match self {
Message::Tributary { tributary, .. } => topic_for_tributary(*tributary),
Message::Cosign(_) => IdentTopic::new(BASE_TOPIC),
}
}
}
pub(crate) type Behavior = Behaviour<IdentityTransform, AllowAllSubscriptionFilter>;
pub(crate) fn new_behavior() -> Behavior {
// The latency used by the Tendermint protocol, used here as the gossip epoch duration
// libp2p-rs defaults to 1 second, whereas ours will be ~2
let heartbeat_interval = tributary_sdk::tendermint::LATENCY_TIME;
// The amount of heartbeats which will occur within a single Tributary block
let heartbeats_per_block =
tributary_sdk::tendermint::TARGET_BLOCK_TIME.div_ceil(heartbeat_interval);
// libp2p-rs defaults to 5, whereas ours will be ~8
let heartbeats_to_keep = 2 * heartbeats_per_block;
// libp2p-rs defaults to 3 whereas ours will be ~4
let heartbeats_to_gossip = heartbeats_per_block;
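// Worked through with the approximations above: a ~2s heartbeat interval and ~4 heartbeats
// per Tributary block keep ~8 heartbeats of history and gossip ~4 per heartbeat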
let config = ConfigBuilder::default()
.protocol_id_prefix(LIBP2P_PROTOCOL)
.history_length(usize::try_from(heartbeats_to_keep).unwrap())
.history_gossip(usize::try_from(heartbeats_to_gossip).unwrap())
.heartbeat_interval(Duration::from_millis(heartbeat_interval.into()))
.max_transmit_size(MAX_LIBP2P_GOSSIP_MESSAGE_SIZE)
.duplicate_cache_time(Duration::from_millis((heartbeats_to_keep * heartbeat_interval).into()))
.validation_mode(ValidationMode::Anonymous)
// Uses a content based message ID to avoid duplicates as much as possible
.message_id_fn(|msg| {
MessageId::new(&Blake2s256::digest([msg.topic.as_str().as_bytes(), &msg.data].concat()))
})
.build();
let mut gossip = Behavior::new(MessageAuthenticity::Anonymous, config.unwrap()).unwrap();
// Subscribe to the base topic
let topic = IdentTopic::new(BASE_TOPIC);
let _ = gossip.subscribe(&topic);
gossip
}


@@ -0,0 +1,417 @@
#![cfg_attr(docsrs, feature(doc_cfg))]
#![doc = include_str!("../README.md")]
#![deny(missing_docs)]
use core::{future::Future, time::Duration};
use std::{
sync::Arc,
collections::{HashSet, HashMap},
};
use rand_core::{RngCore, OsRng};
use zeroize::Zeroizing;
use schnorrkel::Keypair;
use serai_client_serai::{
abi::primitives::{
crypto::Public, network_id::ExternalNetworkId, validator_sets::ExternalValidatorSet,
},
Serai,
};
use tokio::sync::{mpsc, oneshot, Mutex, RwLock};
use serai_task::{Task, ContinuallyRan};
use serai_cosign::SignedCosign;
use libp2p::{
multihash::Multihash,
identity::{self, PeerId},
tcp::Config as TcpConfig,
yamux, allow_block_list,
connection_limits::{self, ConnectionLimits},
swarm::NetworkBehaviour,
SwarmBuilder,
};
use serai_coordinator_p2p::{Heartbeat, TributaryBlockWithCommit};
/// A struct to sync the validators from the Serai node in order to keep track of them.
mod validators;
use validators::UpdateValidatorsTask;
/// The authentication protocol upgrade to limit the P2P network to active validators.
mod authenticate;
use authenticate::OnlyValidators;
/// The ping behavior, used to ensure connection latency is below the limit
mod ping;
/// The request-response messages and behavior
mod reqres;
use reqres::{InboundRequestId, Request, Response};
/// The gossip messages and behavior
mod gossip;
use gossip::Message;
/// The swarm task, running it and dispatching to/from it
mod swarm;
use swarm::SwarmTask;
/// The dial task, to find new peers to connect to
mod dial;
use dial::DialTask;
const PORT: u16 = 30563; // 5132 ^ (('c' << 8) | 'o')
fn peer_id_from_public(public: Public) -> PeerId {
// 0 represents the identity Multihash, that no hash was performed
// It's an internal constant so we can't refer to the constant inside libp2p
PeerId::from_multihash(Multihash::wrap(0, &public.0).unwrap()).unwrap()
}
/// The representation of a peer.
pub struct Peer<'a> {
outbound_requests: &'a mpsc::UnboundedSender<(PeerId, Request, oneshot::Sender<Response>)>,
id: PeerId,
}
impl serai_coordinator_p2p::Peer<'_> for Peer<'_> {
fn send_heartbeat(
&self,
heartbeat: Heartbeat,
) -> impl Send + Future<Output = Option<Vec<TributaryBlockWithCommit>>> {
async move {
const HEARTBEAT_TIMEOUT: Duration = Duration::from_secs(5);
let request = Request::Heartbeat(heartbeat);
let (sender, receiver) = oneshot::channel();
self
.outbound_requests
.send((self.id, request, sender))
.expect("outbound requests recv channel was dropped?");
if let Ok(Ok(Response::Blocks(blocks))) =
tokio::time::timeout(HEARTBEAT_TIMEOUT, receiver).await
{
Some(blocks)
} else {
None
}
}
}
}
#[derive(Clone)]
struct Peers {
peers: Arc<RwLock<HashMap<ExternalNetworkId, HashSet<PeerId>>>>,
}
// Consider adding identify/kad/autonat/rendezvous/(relay + dcutr). While we currently use the Serai
// network for peers, we could use it solely for bootstrapping/as a fallback.
#[derive(NetworkBehaviour)]
struct Behavior {
// Used to only allow Serai validators as peers
allow_list: allow_block_list::Behaviour<allow_block_list::AllowedPeers>,
// Used to limit each peer to a single connection
connection_limits: connection_limits::Behaviour,
// Used to ensure connection latency is within tolerances
ping: ping::Behavior,
// Used to request data from specific peers
reqres: reqres::Behavior,
// Used to broadcast messages to all other peers subscribed to a topic
gossip: gossip::Behavior,
}
#[allow(clippy::type_complexity)]
struct Libp2pInner {
peers: Peers,
gossip: mpsc::UnboundedSender<Message>,
outbound_requests: mpsc::UnboundedSender<(PeerId, Request, oneshot::Sender<Response>)>,
tributary_gossip: Mutex<mpsc::UnboundedReceiver<([u8; 32], Vec<u8>)>>,
signed_cosigns: Mutex<mpsc::UnboundedReceiver<SignedCosign>>,
signed_cosigns_send: mpsc::UnboundedSender<SignedCosign>,
heartbeat_requests:
Mutex<mpsc::UnboundedReceiver<(InboundRequestId, ExternalValidatorSet, [u8; 32])>>,
notable_cosign_requests: Mutex<mpsc::UnboundedReceiver<(InboundRequestId, [u8; 32])>>,
inbound_request_responses: mpsc::UnboundedSender<(InboundRequestId, Response)>,
}
/// The libp2p-backed P2P implementation.
///
/// The P2p trait implementation does not support backpressure and is expected to be fully
/// utilized. Failure to poll the entire API will cause unbounded memory growth.
#[derive(Clone)]
pub struct Libp2p(Arc<Libp2pInner>);
impl Libp2p {
/// Create a new libp2p-backed P2P instance.
///
/// This will spawn all of the internal tasks necessary for functioning.
pub fn new(serai_key: &Zeroizing<Keypair>, serai: Arc<Serai>) -> Libp2p {
// Define the object we track peers with
let peers = Peers { peers: Arc::new(RwLock::new(HashMap::new())) };
// Define the dial task
let (dial_task_def, dial_task) = Task::new();
let (to_dial_send, to_dial_recv) = mpsc::unbounded_channel();
tokio::spawn(
DialTask::new(serai.clone(), peers.clone(), to_dial_send)
.continually_run(dial_task_def, vec![]),
);
let swarm = {
let new_only_validators = |noise_keypair: &identity::Keypair| -> Result<_, ()> {
Ok(OnlyValidators { serai_key: serai_key.clone(), noise_keypair: noise_keypair.clone() })
};
let mut swarm = SwarmBuilder::with_existing_identity(identity::Keypair::generate_ed25519())
.with_tokio()
.with_tcp(TcpConfig::default().nodelay(true), new_only_validators, yamux::Config::default)
.unwrap()
.with_behaviour(|_| Behavior {
allow_list: allow_block_list::Behaviour::default(),
// Limit each peer to a single connection
connection_limits: connection_limits::Behaviour::new(
ConnectionLimits::default().with_max_established_per_peer(Some(1)),
),
ping: ping::new_behavior(),
reqres: reqres::new_behavior(),
gossip: gossip::new_behavior(),
})
.unwrap()
.with_swarm_config(|config| {
config
.with_idle_connection_timeout(ping::INTERVAL + ping::TIMEOUT + Duration::from_secs(5))
})
.build();
swarm.listen_on(format!("/ip4/0.0.0.0/tcp/{PORT}").parse().unwrap()).unwrap();
swarm.listen_on(format!("/ip6/::/tcp/{PORT}").parse().unwrap()).unwrap();
swarm
};
let (swarm_validators, validator_changes) = UpdateValidatorsTask::spawn(serai);
let (gossip_send, gossip_recv) = mpsc::unbounded_channel();
let (signed_cosigns_send, signed_cosigns_recv) = mpsc::unbounded_channel();
let (tributary_gossip_send, tributary_gossip_recv) = mpsc::unbounded_channel();
let (outbound_requests_send, outbound_requests_recv) = mpsc::unbounded_channel();
let (heartbeat_requests_send, heartbeat_requests_recv) = mpsc::unbounded_channel();
let (notable_cosign_requests_send, notable_cosign_requests_recv) = mpsc::unbounded_channel();
let (inbound_request_responses_send, inbound_request_responses_recv) =
mpsc::unbounded_channel();
// Create the swarm task
SwarmTask::spawn(
dial_task,
to_dial_recv,
swarm_validators,
validator_changes,
peers.clone(),
swarm,
gossip_recv,
signed_cosigns_send.clone(),
tributary_gossip_send,
outbound_requests_recv,
heartbeat_requests_send,
notable_cosign_requests_send,
inbound_request_responses_recv,
);
Libp2p(Arc::new(Libp2pInner {
peers,
gossip: gossip_send,
outbound_requests: outbound_requests_send,
tributary_gossip: Mutex::new(tributary_gossip_recv),
signed_cosigns: Mutex::new(signed_cosigns_recv),
signed_cosigns_send,
heartbeat_requests: Mutex::new(heartbeat_requests_recv),
notable_cosign_requests: Mutex::new(notable_cosign_requests_recv),
inbound_request_responses: inbound_request_responses_send,
}))
}
}
impl tributary_sdk::P2p for Libp2p {
fn broadcast(&self, tributary: [u8; 32], message: Vec<u8>) -> impl Send + Future<Output = ()> {
async move {
self
.0
.gossip
.send(Message::Tributary { tributary, message })
.expect("gossip recv channel was dropped?");
}
}
}
impl serai_cosign::RequestNotableCosigns for Libp2p {
type Error = ();
fn request_notable_cosigns(
&self,
global_session: [u8; 32],
) -> impl Send + Future<Output = Result<(), Self::Error>> {
async move {
const AMOUNT_OF_PEERS_TO_REQUEST_FROM: usize = 3;
const NOTABLE_COSIGNS_TIMEOUT: Duration = Duration::from_secs(5);
let request = Request::NotableCosigns { global_session };
let peers = self.0.peers.peers.read().await.clone();
// HashSet of all peers
let peers = peers.into_values().flat_map(<_>::into_iter).collect::<HashSet<_>>();
// Vec of all peers
let mut peers = peers.into_iter().collect::<Vec<_>>();
let mut channels = Vec::with_capacity(AMOUNT_OF_PEERS_TO_REQUEST_FROM);
for _ in 0 .. AMOUNT_OF_PEERS_TO_REQUEST_FROM {
if peers.is_empty() {
break;
}
let i = usize::try_from(OsRng.next_u64() % u64::try_from(peers.len()).unwrap()).unwrap();
let peer = peers.swap_remove(i);
let (sender, receiver) = oneshot::channel();
self
.0
.outbound_requests
.send((peer, request, sender))
.expect("outbound requests recv channel was dropped?");
channels.push(receiver);
}
// We could reduce our latency by using FuturesUnordered here but the latency isn't a concern
for channel in channels {
if let Ok(Ok(Response::NotableCosigns(cosigns))) =
tokio::time::timeout(NOTABLE_COSIGNS_TIMEOUT, channel).await
{
for cosign in cosigns {
self
.0
.signed_cosigns_send
.send(cosign)
.expect("signed_cosigns recv in this object was dropped?");
}
}
}
Ok(())
}
}
}
impl serai_coordinator_p2p::P2p for Libp2p {
type Peer<'a> = Peer<'a>;
fn peers(&self, network: ExternalNetworkId) -> impl Send + Future<Output = Vec<Self::Peer<'_>>> {
async move {
let Some(peer_ids) = self.0.peers.peers.read().await.get(&network).cloned() else {
return vec![];
};
let mut res = vec![];
for id in peer_ids {
res.push(Peer { outbound_requests: &self.0.outbound_requests, id });
}
res
}
}
fn publish_cosign(&self, cosign: SignedCosign) -> impl Send + Future<Output = ()> {
async move {
self.0.gossip.send(Message::Cosign(cosign)).expect("gossip recv channel was dropped?");
}
}
fn heartbeat(
&self,
) -> impl Send + Future<Output = (Heartbeat, oneshot::Sender<Vec<TributaryBlockWithCommit>>)> {
async move {
let (request_id, set, latest_block_hash) = self
.0
.heartbeat_requests
.lock()
.await
.recv()
.await
.expect("heartbeat_requests_send was dropped?");
let (sender, receiver) = oneshot::channel();
tokio::spawn({
let respond = self.0.inbound_request_responses.clone();
async move {
// The swarm task expects us to respond to every request. If the caller drops this
// channel, we'll receive `Err` and respond with `vec![]`, safely satisfying that bound
// without requiring the caller send a value down this channel
let response = if let Ok(blocks) = receiver.await {
Response::Blocks(blocks)
} else {
Response::Blocks(vec![])
};
respond
.send((request_id, response))
.expect("inbound_request_responses_recv was dropped?");
}
});
(Heartbeat { set, latest_block_hash }, sender)
}
}
fn notable_cosigns_request(
&self,
) -> impl Send + Future<Output = ([u8; 32], oneshot::Sender<Vec<SignedCosign>>)> {
async move {
let (request_id, global_session) = self
.0
.notable_cosign_requests
.lock()
.await
.recv()
.await
.expect("notable_cosign_requests_send was dropped?");
let (sender, receiver) = oneshot::channel();
tokio::spawn({
let respond = self.0.inbound_request_responses.clone();
async move {
let response = if let Ok(notable_cosigns) = receiver.await {
Response::NotableCosigns(notable_cosigns)
} else {
Response::NotableCosigns(vec![])
};
respond
.send((request_id, response))
.expect("inbound_request_responses_recv was dropped?");
}
});
(global_session, sender)
}
}
fn tributary_message(&self) -> impl Send + Future<Output = ([u8; 32], Vec<u8>)> {
async move {
self.0.tributary_gossip.lock().await.recv().await.expect("tributary_gossip send was dropped?")
}
}
fn cosign(&self) -> impl Send + Future<Output = SignedCosign> {
async move {
self
.0
.signed_cosigns
.lock()
.await
.recv()
.await
.expect("signed_cosigns couldn't recv despite send in same object?")
}
}
}


@@ -0,0 +1,17 @@
use core::time::Duration;
use tributary_sdk::tendermint::LATENCY_TIME;
use libp2p::ping::{self, Config, Behaviour};
pub use ping::Event;
pub(crate) const INTERVAL: Duration = Duration::from_secs(30);
// LATENCY_TIME represents the maximum latency for message delivery. Sending the ping, and
// receiving the pong, each have to occur within this time bound to validate the connection. We
// enforce that, as best we can, by requiring the round-trip be within twice the allowed latency.
pub(crate) const TIMEOUT: Duration = Duration::from_millis((2 * LATENCY_TIME) as u64);
pub(crate) type Behavior = Behaviour;
pub(crate) fn new_behavior() -> Behavior {
Behavior::new(Config::default().with_interval(INTERVAL).with_timeout(TIMEOUT))
}


@@ -0,0 +1,134 @@
use core::{fmt, time::Duration};
use std::io;
use async_trait::async_trait;
use borsh::{BorshSerialize, BorshDeserialize};
use futures_util::{AsyncRead, AsyncReadExt, AsyncWrite, AsyncWriteExt};
use libp2p::request_response::{
self, Codec as CodecTrait, Event as GenericEvent, Config, Behaviour, ProtocolSupport,
};
pub use request_response::{InboundRequestId, Message};
use serai_cosign::SignedCosign;
use serai_coordinator_p2p::{Heartbeat, TributaryBlockWithCommit};
/// The maximum message size for the request-response protocol
// This is derived from the heartbeat message size as it's our largest message
pub(crate) const MAX_LIBP2P_REQRES_MESSAGE_SIZE: usize =
1024 + serai_coordinator_p2p::heartbeat::BATCH_SIZE_LIMIT;
const PROTOCOL: &str = "/serai/coordinator/reqres/1.0.0";
/// Requests which can be made via the request-response protocol.
#[derive(Clone, Copy, Debug, BorshSerialize, BorshDeserialize)]
pub(crate) enum Request {
/// A heartbeat informing our peers of our latest block, for the specified blockchain, on regular
/// intervals.
///
/// If our peers have more blocks than us, they're expected to respond with those blocks.
Heartbeat(Heartbeat),
/// A request for the notable cosigns for a global session.
NotableCosigns { global_session: [u8; 32] },
}
/// Responses which can be received via the request-response protocol.
#[derive(Clone, BorshSerialize, BorshDeserialize)]
pub(crate) enum Response {
None,
Blocks(Vec<TributaryBlockWithCommit>),
NotableCosigns(Vec<SignedCosign>),
}
impl fmt::Debug for Response {
fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result {
match self {
Response::None => fmt.debug_struct("Response::None").finish(),
Response::Blocks(_) => fmt.debug_struct("Response::Blocks").finish_non_exhaustive(),
Response::NotableCosigns(_) => {
fmt.debug_struct("Response::NotableCosigns").finish_non_exhaustive()
}
}
}
}
/// The codec used for the request-response protocol.
///
/// We don't use CBOR or JSON, but use borsh to create `Vec<u8>`s we then length-prefix. While
/// ideally, we'd use borsh directly with the `io` traits defined here, they're async and there
/// isn't an amenable API within borsh for incremental deserialization.
#[derive(Default, Clone, Copy, Debug)]
pub(crate) struct Codec;
impl Codec {
async fn read<M: BorshDeserialize>(io: &mut (impl Unpin + AsyncRead)) -> io::Result<M> {
let mut len = [0; 4];
io.read_exact(&mut len).await?;
let len = usize::try_from(u32::from_le_bytes(len)).expect("not at least a 32-bit platform?");
if len > MAX_LIBP2P_REQRES_MESSAGE_SIZE {
Err(io::Error::other("request length exceeded MAX_LIBP2P_REQRES_MESSAGE_SIZE"))?;
}
// This may be a non-trivial allocation easily causable
// While we could chunk the read, meaning we only perform the allocation as bandwidth is used,
// the max message size should be sufficiently sane
let mut buf = vec![0; len];
io.read_exact(&mut buf).await?;
let mut buf = buf.as_slice();
let res = M::deserialize(&mut buf)?;
if !buf.is_empty() {
Err(io::Error::other("p2p message had extra data appended to it"))?;
}
Ok(res)
}
async fn write(io: &mut (impl Unpin + AsyncWrite), msg: &impl BorshSerialize) -> io::Result<()> {
let msg = borsh::to_vec(msg).unwrap();
io.write_all(&u32::try_from(msg.len()).unwrap().to_le_bytes()).await?;
io.write_all(&msg).await
}
}
#[async_trait]
impl CodecTrait for Codec {
type Protocol = &'static str;
type Request = Request;
type Response = Response;
async fn read_request<R: Send + Unpin + AsyncRead>(
&mut self,
_: &Self::Protocol,
io: &mut R,
) -> io::Result<Request> {
Self::read(io).await
}
async fn read_response<R: Send + Unpin + AsyncRead>(
&mut self,
_: &Self::Protocol,
io: &mut R,
) -> io::Result<Response> {
Self::read(io).await
}
async fn write_request<W: Send + Unpin + AsyncWrite>(
&mut self,
_: &Self::Protocol,
io: &mut W,
req: Request,
) -> io::Result<()> {
Self::write(io, &req).await
}
async fn write_response<W: Send + Unpin + AsyncWrite>(
&mut self,
_: &Self::Protocol,
io: &mut W,
res: Response,
) -> io::Result<()> {
Self::write(io, &res).await
}
}
pub(crate) type Event = GenericEvent<Request, Response>;
pub(crate) type Behavior = Behaviour<Codec>;
pub(crate) fn new_behavior() -> Behavior {
let config = Config::default().with_request_timeout(Duration::from_secs(5));
Behavior::new([(PROTOCOL, ProtocolSupport::Full)], config)
}


@@ -0,0 +1,360 @@
use std::{
sync::Arc,
collections::{HashSet, HashMap},
time::{Duration, Instant},
};
use borsh::BorshDeserialize;
use serai_client_serai::abi::primitives::validator_sets::ExternalValidatorSet;
use tokio::sync::{mpsc, oneshot, RwLock};
use serai_task::TaskHandle;
use serai_cosign::SignedCosign;
use futures_util::StreamExt;
use libp2p::{
identity::PeerId,
request_response::{InboundRequestId, OutboundRequestId, ResponseChannel},
swarm::{dial_opts::DialOpts, SwarmEvent, Swarm},
};
use serai_coordinator_p2p::Heartbeat;
use crate::{
Peers, BehaviorEvent, Behavior,
validators::{self, Validators},
ping,
reqres::{self, Request, Response},
gossip,
};
const TIME_BETWEEN_REBUILD_PEERS: Duration = Duration::from_secs(10 * 60);
/*
`SwarmTask` handles everything we need the `Swarm` object for. The goal is to minimize the
contention on this task. Unfortunately, the `Swarm` object itself is needed for a variety of
purposes making this a rather large task.
Responsibilities include:
- Actually dialing new peers (the selection process occurs in another task)
- Maintaining the peers structure (as we need the Swarm object to see who our peers are)
- Gossiping messages
- Dispatching gossiped messages
- Sending requests
- Dispatching responses to requests
- Dispatching received requests
- Sending responses
*/
pub(crate) struct SwarmTask {
dial_task: TaskHandle,
to_dial: mpsc::UnboundedReceiver<DialOpts>,
last_dial_task_run: Instant,
validators: Arc<RwLock<Validators>>,
validator_changes: mpsc::UnboundedReceiver<validators::Changes>,
peers: Peers,
rebuild_peers_at: Instant,
swarm: Swarm<Behavior>,
gossip: mpsc::UnboundedReceiver<gossip::Message>,
signed_cosigns: mpsc::UnboundedSender<SignedCosign>,
tributary_gossip: mpsc::UnboundedSender<([u8; 32], Vec<u8>)>,
outbound_requests: mpsc::UnboundedReceiver<(PeerId, Request, oneshot::Sender<Response>)>,
outbound_request_responses: HashMap<OutboundRequestId, oneshot::Sender<Response>>,
inbound_request_response_channels: HashMap<InboundRequestId, ResponseChannel<Response>>,
heartbeat_requests: mpsc::UnboundedSender<(InboundRequestId, ExternalValidatorSet, [u8; 32])>,
notable_cosign_requests: mpsc::UnboundedSender<(InboundRequestId, [u8; 32])>,
inbound_request_responses: mpsc::UnboundedReceiver<(InboundRequestId, Response)>,
}
impl SwarmTask {
fn handle_gossip(&mut self, event: gossip::Event) {
match event {
gossip::Event::Message { message, .. } => {
let Ok(message) = gossip::Message::deserialize(&mut message.data.as_slice()) else {
// TODO: Penalize the PeerId which created this message, which requires authenticating
// each message OR moving to explicit acknowledgement before re-gossiping
return;
};
match message {
gossip::Message::Tributary { tributary, message } => {
let _: Result<_, _> = self.tributary_gossip.send((tributary, message));
}
gossip::Message::Cosign(signed_cosign) => {
let _: Result<_, _> = self.signed_cosigns.send(signed_cosign);
}
}
}
gossip::Event::Subscribed { .. } | gossip::Event::Unsubscribed { .. } => {}
gossip::Event::GossipsubNotSupported { peer_id } |
gossip::Event::SlowPeer { peer_id, .. } => {
let _: Result<_, _> = self.swarm.disconnect_peer_id(peer_id);
}
}
}
fn handle_reqres(&mut self, event: reqres::Event) {
match event {
reqres::Event::Message { message, .. } => match message {
reqres::Message::Request { request_id, request, channel } => match request {
reqres::Request::Heartbeat(Heartbeat { set, latest_block_hash }) => {
self.inbound_request_response_channels.insert(request_id, channel);
let _: Result<_, _> =
self.heartbeat_requests.send((request_id, set, latest_block_hash));
}
reqres::Request::NotableCosigns { global_session } => {
self.inbound_request_response_channels.insert(request_id, channel);
let _: Result<_, _> = self.notable_cosign_requests.send((request_id, global_session));
}
},
reqres::Message::Response { request_id, response } => {
if let Some(channel) = self.outbound_request_responses.remove(&request_id) {
let _: Result<_, _> = channel.send(response);
}
}
},
reqres::Event::OutboundFailure { request_id, .. } => {
// Send None as the response for the request
if let Some(channel) = self.outbound_request_responses.remove(&request_id) {
let _: Result<_, _> = channel.send(Response::None);
}
}
reqres::Event::InboundFailure { .. } | reqres::Event::ResponseSent { .. } => {}
}
}
async fn run(mut self) {
loop {
let time_till_rebuild_peers = self.rebuild_peers_at.saturating_duration_since(Instant::now());
tokio::select! {
// If the validators have changed, update the allow list
validator_changes = self.validator_changes.recv() => {
let validator_changes = validator_changes.expect("validators update task shut down?");
let behavior = &mut self.swarm.behaviour_mut().allow_list;
for removed in validator_changes.removed {
behavior.disallow_peer(removed);
}
for added in validator_changes.added {
behavior.allow_peer(added);
}
}
// Dial peers we're instructed to
dial_opts = self.to_dial.recv() => {
let dial_opts = dial_opts.expect("DialTask was closed?");
let _: Result<_, _> = self.swarm.dial(dial_opts);
}
/*
Rebuild the peers every 10 minutes.
This protects against any race conditions/edge cases we have in our logic to track peers,
along with unrepresented behavior such as when a peer changes the networks they're active
in. This lets the peer tracking logic simply be 'good enough' to not become horribly
corrupt over the span of `TIME_BETWEEN_REBUILD_PEERS`.
We also use this to disconnect all peers who are no longer active in any network.
*/
() = tokio::time::sleep(time_till_rebuild_peers) => {
let validators_by_network = self.validators.read().await.by_network().clone();
let connected_peers = self.swarm.connected_peers().copied().collect::<HashSet<_>>();
// Build the new peers object
let mut peers = HashMap::new();
for (network, validators) in validators_by_network {
peers.insert(network, validators.intersection(&connected_peers).copied().collect());
}
// Write the new peers object
*self.peers.peers.write().await = peers;
self.rebuild_peers_at = Instant::now() + TIME_BETWEEN_REBUILD_PEERS;
}
// Handle swarm events
event = self.swarm.next() => {
// `Swarm::next` will never return `Poll::Ready(None)`
// https://docs.rs/libp2p/0.54.1/libp2p/struct.Swarm.html#impl-Stream-for-Swarm%3CTBehaviour%3E
let event = event.unwrap();
match event {
// New connection, so update peers
SwarmEvent::ConnectionEstablished { peer_id, .. } => {
let Some(networks) =
self.validators.read().await.networks(&peer_id).cloned() else { continue };
let mut peers = self.peers.peers.write().await;
for network in networks {
peers.entry(network).or_insert_with(HashSet::new).insert(peer_id);
}
}
// Connection closed, so update peers
SwarmEvent::ConnectionClosed { peer_id, .. } => {
let Some(networks) =
self.validators.read().await.networks(&peer_id).cloned() else { continue };
let mut peers = self.peers.peers.write().await;
for network in networks {
peers.entry(network).or_insert_with(HashSet::new).remove(&peer_id);
}
/*
We want to re-run the dial task, since we lost a peer, in case we should find new
peers. This opens a DoS where a validator repeatedly opens/closes connections to
force iterations of the dial task. We prevent this by setting a minimum distance
since the last explicit iteration.
This is suboptimal. If we have several disconnects in immediate proximity, we'll
trigger the dial task upon the first (where we may still have enough peers we
shouldn't dial more) but not the last (where we may have so few peers left we
should dial more). This is accepted as the dial task will eventually run on its
natural timer.
*/
const MINIMUM_TIME_SINCE_LAST_EXPLICIT_DIAL: Duration = Duration::from_secs(60);
let now = Instant::now();
if (self.last_dial_task_run + MINIMUM_TIME_SINCE_LAST_EXPLICIT_DIAL) < now {
self.dial_task.run_now();
self.last_dial_task_run = now;
}
}
SwarmEvent::Behaviour(event) => {
match event {
BehaviorEvent::AllowList(event) | BehaviorEvent::ConnectionLimits(event) => {
// This *is* an exhaustive match as these events are empty enums
match event {}
}
BehaviorEvent::Ping(ping::Event { peer: _, connection, result, }) => {
if result.is_err() {
self.swarm.close_connection(connection);
}
}
BehaviorEvent::Reqres(event) => self.handle_reqres(event),
BehaviorEvent::Gossip(event) => self.handle_gossip(event),
}
}
// We don't handle any of these
SwarmEvent::IncomingConnection { .. } |
SwarmEvent::IncomingConnectionError { .. } |
SwarmEvent::OutgoingConnectionError { .. } |
SwarmEvent::NewListenAddr { .. } |
SwarmEvent::ExpiredListenAddr { .. } |
SwarmEvent::ListenerClosed { .. } |
SwarmEvent::ListenerError { .. } |
SwarmEvent::Dialing { .. } |
SwarmEvent::NewExternalAddrCandidate { .. } |
SwarmEvent::ExternalAddrConfirmed { .. } |
SwarmEvent::ExternalAddrExpired { .. } |
SwarmEvent::NewExternalAddrOfPeer { .. } => {}
// Required as SwarmEvent is non-exhaustive
_ => log::warn!("unhandled SwarmEvent: {event:?}"),
}
}
message = self.gossip.recv() => {
let message = message.expect("channel for messages to gossip was closed?");
let topic = message.topic();
let message = borsh::to_vec(&message).unwrap();
/*
If we're sending a message for this topic, it's because this topic is relevant to us.
Subscribe to it.
We create topics roughly weekly, one per validator set/session. Once present in a
topic, we're interested in all messages for it until the validator set/session retires.
Then there should no longer be any messages for the topic as we should drop the
Tributary which creates the messages.
We use this as an argument to not bother implementing unsubscribing from topics. They're
incredibly infrequently created and old topics shouldn't still have messages published
to them. Having a coordinator reboot be our method of unsubscribing is fine.
Alternatively, we could route an API to determine when a topic is retired, or retire
any topics we haven't sent messages on in the past hour.
*/
let behavior = self.swarm.behaviour_mut();
let _: Result<_, _> = behavior.gossip.subscribe(&topic);
/*
This may be an error of `InsufficientPeers`. If so, we could ask DialTask to dial more
peers for this network. We don't as we assume DialTask will detect the lack of peers
for this network, and will already successfully handle this.
*/
let _: Result<_, _> = behavior.gossip.publish(topic.hash(), message);
}
request = self.outbound_requests.recv() => {
let (peer, request, response_channel) =
request.expect("channel for requests was closed?");
let request_id = self.swarm.behaviour_mut().reqres.send_request(&peer, request);
self.outbound_request_responses.insert(request_id, response_channel);
}
response = self.inbound_request_responses.recv() => {
let (request_id, response) =
response.expect("channel for inbound request responses was closed?");
if let Some(channel) = self.inbound_request_response_channels.remove(&request_id) {
let _: Result<_, _> =
self.swarm.behaviour_mut().reqres.send_response(channel, response);
}
}
}
}
}
#[allow(clippy::too_many_arguments)]
pub(crate) fn spawn(
dial_task: TaskHandle,
to_dial: mpsc::UnboundedReceiver<DialOpts>,
validators: Arc<RwLock<Validators>>,
validator_changes: mpsc::UnboundedReceiver<validators::Changes>,
peers: Peers,
swarm: Swarm<Behavior>,
gossip: mpsc::UnboundedReceiver<gossip::Message>,
signed_cosigns: mpsc::UnboundedSender<SignedCosign>,
tributary_gossip: mpsc::UnboundedSender<([u8; 32], Vec<u8>)>,
outbound_requests: mpsc::UnboundedReceiver<(PeerId, Request, oneshot::Sender<Response>)>,
heartbeat_requests: mpsc::UnboundedSender<(InboundRequestId, ExternalValidatorSet, [u8; 32])>,
notable_cosign_requests: mpsc::UnboundedSender<(InboundRequestId, [u8; 32])>,
inbound_request_responses: mpsc::UnboundedReceiver<(InboundRequestId, Response)>,
) {
tokio::spawn(
SwarmTask {
dial_task,
to_dial,
last_dial_task_run: Instant::now(),
validators,
validator_changes,
peers,
rebuild_peers_at: Instant::now() + TIME_BETWEEN_REBUILD_PEERS,
swarm,
gossip,
signed_cosigns,
tributary_gossip,
outbound_requests,
outbound_request_responses: HashMap::new(),
inbound_request_response_channels: HashMap::new(),
heartbeat_requests,
notable_cosign_requests,
inbound_request_responses,
}
.run(),
);
}
}


@@ -0,0 +1,224 @@
use core::{borrow::Borrow, future::Future};
use std::{
sync::Arc,
collections::{HashSet, HashMap},
};
use serai_client_serai::abi::primitives::{network_id::ExternalNetworkId, validator_sets::Session};
use serai_client_serai::{RpcError, Serai};
use serai_task::{Task, ContinuallyRan};
use libp2p::PeerId;
use futures_util::stream::{StreamExt, FuturesUnordered};
use tokio::sync::{mpsc, RwLock};
use crate::peer_id_from_public;
pub(crate) struct Changes {
pub(crate) removed: HashSet<PeerId>,
pub(crate) added: HashSet<PeerId>,
}
pub(crate) struct Validators {
serai: Arc<Serai>,
// A cache of which session's validators we're populated with
sessions: HashMap<ExternalNetworkId, Session>,
// The validators by network
by_network: HashMap<ExternalNetworkId, HashSet<PeerId>>,
// The validators and their networks
validators: HashMap<PeerId, HashSet<ExternalNetworkId>>,
// The channel to send the changes down
changes: mpsc::UnboundedSender<Changes>,
}
impl Validators {
pub(crate) fn new(serai: Arc<Serai>) -> (Self, mpsc::UnboundedReceiver<Changes>) {
let (send, recv) = mpsc::unbounded_channel();
let validators = Validators {
serai,
sessions: HashMap::new(),
by_network: HashMap::new(),
validators: HashMap::new(),
changes: send,
};
(validators, recv)
}
async fn session_changes(
serai: impl Borrow<Serai>,
sessions: impl Borrow<HashMap<ExternalNetworkId, Session>>,
) -> Result<Vec<(ExternalNetworkId, Session, HashSet<PeerId>)>, RpcError> {
/*
This uses the latest finalized block, not the latest cosigned block, which should be fine as
in the worst case, we'd connect to unexpected validators. They still shouldn't be able to
bypass the cosign protocol unless a historical global session was malicious, in which case
the cosign protocol already breaks.
Besides, we can't connect to historical validators, only the current validators.
*/
let serai = serai.borrow().state().await?;
let mut session_changes = vec![];
{
// FuturesUnordered can be bad practice as it'll cause timeouts if infrequently polled, but
// we poll it till it yields all futures with the most minimal processing possible
let mut futures = FuturesUnordered::new();
for network in ExternalNetworkId::all() {
let sessions = sessions.borrow();
let serai = serai.borrow();
futures.push(async move {
let session = match serai.current_session(network.into()).await {
Ok(Some(session)) => session,
Ok(None) => return Ok(None),
Err(e) => return Err(e),
};
if sessions.get(&network) == Some(&session) {
Ok(None)
} else {
match serai.current_validators(network.into()).await {
Ok(Some(validators)) => Ok(Some((
network,
session,
validators
.into_iter()
.map(|validator| peer_id_from_public(validator.into()))
.collect(),
))),
Ok(None) => panic!("network has session yet no validators"),
Err(e) => Err(e),
}
}
});
}
while let Some(session_change) = futures.next().await {
if let Some(session_change) = session_change? {
session_changes.push(session_change);
}
}
}
Ok(session_changes)
}
fn incorporate_session_changes(
&mut self,
session_changes: Vec<(ExternalNetworkId, Session, HashSet<PeerId>)>,
) {
let mut removed = HashSet::new();
let mut added = HashSet::new();
for (network, session, validators) in session_changes {
// Remove the existing validators
for validator in self.by_network.remove(&network).unwrap_or_else(HashSet::new) {
// Get all networks this validator is in
let mut networks = self.validators.remove(&validator).unwrap();
// Remove this one
networks.remove(&network);
if !networks.is_empty() {
// Insert the networks back if the validator was present in other networks
self.validators.insert(validator, networks);
} else {
// Because this validator is no longer present in any network, mark them as removed
/*
This isn't accurate. The validator isn't present in the latest session for this
network. The validator was present in the prior session which has yet to retire. Our
lack of explicit inclusion for both the prior session and the current session causes
only the validators mutually present in both sessions to be responsible for all actions
still ongoing as the prior validator set retires.
TODO: Fix this
*/
removed.insert(validator);
}
}
// Add the new validators
for validator in validators.iter().copied() {
self.validators.entry(validator).or_insert_with(HashSet::new).insert(network);
added.insert(validator);
}
self.by_network.insert(network, validators);
// Update the session we have populated
self.sessions.insert(network, session);
}
// Only flag validators for removal if they weren't simultaneously added by these changes
removed.retain(|validator| !added.contains(validator));
// Send the changes, dropping the error
// This lets the caller opt-out of change notifications by dropping the receiver
let _: Result<_, _> = self.changes.send(Changes { removed, added });
}
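// An illustrative consequence of the above (not part of the original file): a validator
// dropped from the Bitcoin set who remains in the Monero set isn't reported in `removed`,
// and one dropped from a set while simultaneously entering another's is filtered out of
// `removed` by the `retain` above.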
/// Update the view of the validators.
pub(crate) async fn update(&mut self) -> Result<(), RpcError> {
let session_changes = Self::session_changes(&*self.serai, &self.sessions).await?;
self.incorporate_session_changes(session_changes);
Ok(())
}
pub(crate) fn by_network(&self) -> &HashMap<ExternalNetworkId, HashSet<PeerId>> {
&self.by_network
}
pub(crate) fn networks(&self, peer_id: &PeerId) -> Option<&HashSet<ExternalNetworkId>> {
self.validators.get(peer_id)
}
}
/// A task which updates a set of validators.
///
/// The validators managed by this task will have their exclusive lock held for a minimal amount of
/// time while the update occurs to minimize the disruption to the services relying on it.
pub(crate) struct UpdateValidatorsTask {
validators: Arc<RwLock<Validators>>,
}
impl UpdateValidatorsTask {
/// Spawn a new instance of the UpdateValidatorsTask.
///
/// This returns a reference to the Validators it updates after spawning itself.
pub(crate) fn spawn(
serai: Arc<Serai>,
) -> (Arc<RwLock<Validators>>, mpsc::UnboundedReceiver<Changes>) {
// The validators which will be updated
let (validators, changes) = Validators::new(serai);
let validators = Arc::new(RwLock::new(validators));
// Define the task
let (update_validators_task, update_validators_task_handle) = Task::new();
// Forget the handle, as dropping the handle would stop the task
core::mem::forget(update_validators_task_handle);
// Spawn the task
tokio::spawn(
(Self { validators: validators.clone() }).continually_run(update_validators_task, vec![]),
);
// Return the validators
(validators, changes)
}
}
impl ContinuallyRan for UpdateValidatorsTask {
// Only run every minute, not the default of every five seconds
const DELAY_BETWEEN_ITERATIONS: u64 = 60;
const MAX_DELAY_BETWEEN_ITERATIONS: u64 = 5 * 60;
type Error = RpcError;
fn run_iteration(&mut self) -> impl Send + Future<Output = Result<bool, Self::Error>> {
async move {
let session_changes = {
let validators = self.validators.read().await;
Validators::session_changes(validators.serai.clone(), validators.sessions.clone()).await?
};
self.validators.write().await.incorporate_session_changes(session_changes);
Ok(true)
}
}
}


@@ -0,0 +1,151 @@
use core::future::Future;
use std::time::{Duration, SystemTime};
use serai_primitives::validator_sets::{ExternalValidatorSet, KeyShares};
use futures_lite::FutureExt;
use tributary_sdk::{ReadWrite, TransactionTrait, Block, Tributary, TributaryReader};
use serai_db::*;
use serai_task::ContinuallyRan;
use crate::{Heartbeat, Peer, P2p};
// Amount of blocks in a minute
const BLOCKS_PER_MINUTE: usize =
(60 / (tributary_sdk::tendermint::TARGET_BLOCK_TIME / 1000)) as usize;
/// The minimum amount of blocks to include within a batch, assuming there are blocks to
/// include in the batch.
///
/// This decides the size limit of the Batch (the Block size limit multiplied by the minimum amount
/// of blocks we'll send). The actual amount of blocks sent will be the amount which fits within
/// the size limit.
pub const MIN_BLOCKS_PER_BATCH: usize = BLOCKS_PER_MINUTE + 1;
/// The size limit for a batch of blocks sent in response to a Heartbeat.
///
/// This estimates the size of a commit as `32 + (MAX_VALIDATORS * 128)`. At the time of writing, a
/// commit is `8 + (validators * 32) + (32 + (validators * 32))` (for the time, list of validators,
/// and aggregate signature). Accordingly, this should be a safe over-estimate.
pub const BATCH_SIZE_LIMIT: usize = MIN_BLOCKS_PER_BATCH *
(tributary_sdk::BLOCK_SIZE_LIMIT + 32 + ((KeyShares::MAX_PER_SET as usize) * 128));
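// An illustrative check of the arithmetic above (not part of the original file). The concrete
// number assumes a hypothetical 6s target block time; the relations are asserted generally.
#[cfg(test)]
mod batch_constant_tests {
  use super::*;

  #[test]
  fn batch_constants_are_consistent() {
    // A batch must hold over a minute's worth of blocks, or a node which fell a minute behind
    // could never catch up to a chain producing blocks at the target rate
    assert_eq!(MIN_BLOCKS_PER_BATCH, BLOCKS_PER_MINUTE + 1);
    // With a 6000ms target block time, that's eleven blocks per batch
    if tributary_sdk::tendermint::TARGET_BLOCK_TIME == 6000 {
      assert_eq!(MIN_BLOCKS_PER_BATCH, 11);
    }
    // The size limit reserves, per block, the full block size plus the over-estimated commit
    assert!(BATCH_SIZE_LIMIT >= MIN_BLOCKS_PER_BATCH * tributary_sdk::BLOCK_SIZE_LIMIT);
  }
}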
/// Sends a heartbeat to other validators on regular intervals informing them of our Tributary's
/// tip.
///
/// If the other validator has more blocks than we do, they're expected to inform us. This forms
/// the sync protocol for our Tributaries.
pub(crate) struct HeartbeatTask<TD: Db, Tx: TransactionTrait, P: P2p> {
pub(crate) set: ExternalValidatorSet,
pub(crate) tributary: Tributary<TD, Tx, P>,
pub(crate) reader: TributaryReader<TD, Tx>,
pub(crate) p2p: P,
}
impl<TD: Db, Tx: TransactionTrait, P: P2p> ContinuallyRan for HeartbeatTask<TD, Tx, P> {
type Error = String;
fn run_iteration(&mut self) -> impl Send + Future<Output = Result<bool, Self::Error>> {
async move {
// If our blockchain hasn't had a block in the past minute, trigger the heartbeat protocol
const TIME_TO_TRIGGER_SYNCING: Duration = Duration::from_secs(60);
let mut tip = self.reader.tip();
let time_since = {
let block_time = if let Some(time_of_block) = self.reader.time_of_block(&tip) {
SystemTime::UNIX_EPOCH + Duration::from_secs(time_of_block)
} else {
// If we couldn't fetch this block's time, assume it's old
// We don't want to declare its unix time as 0 and claim it's 50+ years old though
log::warn!(
"heartbeat task couldn't fetch the time of a block, flagging it as a minute old"
);
SystemTime::now() - TIME_TO_TRIGGER_SYNCING
};
SystemTime::now().duration_since(block_time).unwrap_or(Duration::ZERO)
};
let mut tip_is_stale = false;
let mut synced_block = false;
if TIME_TO_TRIGGER_SYNCING <= time_since {
log::warn!(
"last known tributary block for {:?} was {} seconds ago",
self.set,
time_since.as_secs()
);
// This requests all peers for this network, without differentiating by session
// This should be fine as most validators should overlap across sessions
'peer: for peer in self.p2p.peers(self.set.network).await {
loop {
// Create the request for blocks
if tip_is_stale {
tip = self.reader.tip();
tip_is_stale = false;
}
// Necessary due to https://github.com/rust-lang/rust/issues/100013
let Some(blocks) = peer
.send_heartbeat(Heartbeat { set: self.set, latest_block_hash: tip })
.boxed()
.await
else {
continue 'peer;
};
// This is the final batch if it has less than the maximum amount of blocks
// (signifying there weren't more blocks after this to fill the batch with)
let final_batch = blocks.len() < MIN_BLOCKS_PER_BATCH;
// Sync each block
for block_with_commit in blocks {
let Ok(block) = Block::read(&mut block_with_commit.block.as_slice()) else {
// TODO: Disconnect/slash this peer
log::warn!("received invalid Block inside response to heartbeat");
continue 'peer;
};
// Attempt to sync the block
if !self.tributary.sync_block(block, block_with_commit.commit).await {
// The block may be invalid or stale if we added a block elsewhere
if (!tip_is_stale) && (tip != self.reader.tip()) {
// Since the Tributary's tip advanced on its own, return
return Ok(false);
}
// Since this block was invalid or stale in a way non-trivial to detect, try to
// sync with the next peer
continue 'peer;
}
// Because we synced a block, flag the tip as stale
tip_is_stale = true;
// And that we did sync a block
synced_block = true;
}
// If this was the final batch, move on from this peer
// We could assume they were honest and we are done syncing the chain, but this is a
// bit more robust
if final_batch {
continue 'peer;
}
}
}
// This will cause the task to be run less and less often, ensuring we aren't spamming the
// network if we legitimately aren't making progress
if !synced_block {
Err(format!(
"tried to sync blocks for {:?} since we haven't seen one in {} seconds but didn't",
self.set,
time_since.as_secs(),
))?;
}
}
Ok(synced_block)
}
}
}

coordinator/p2p/src/lib.rs

@@ -0,0 +1,204 @@
#![cfg_attr(docsrs, feature(doc_cfg))]
#![doc = include_str!("../README.md")]
#![deny(missing_docs)]
use core::future::Future;
use std::collections::HashMap;
use borsh::{BorshSerialize, BorshDeserialize};
use serai_primitives::{network_id::ExternalNetworkId, validator_sets::ExternalValidatorSet};
use serai_db::Db;
use tributary_sdk::{ReadWrite, TransactionTrait, Tributary, TributaryReader};
use serai_cosign::{SignedCosign, Cosigning};
use tokio::sync::{mpsc, oneshot};
use serai_task::{Task, ContinuallyRan};
/// The heartbeat task, effecting sync of Tributaries
pub mod heartbeat;
use crate::heartbeat::HeartbeatTask;
/// A heartbeat for a Tributary.
#[derive(Clone, Copy, BorshSerialize, BorshDeserialize, Debug)]
pub struct Heartbeat {
/// The Tributary this is the heartbeat of.
pub set: ExternalValidatorSet,
/// The hash of the latest block added to the Tributary.
pub latest_block_hash: [u8; 32],
}
/// A tributary block and its commit.
#[derive(Clone, BorshSerialize, BorshDeserialize)]
pub struct TributaryBlockWithCommit {
/// The serialized block.
pub block: Vec<u8>,
/// The serialized commit.
pub commit: Vec<u8>,
}
/// A representation of a peer.
pub trait Peer<'a>: Send {
/// Send a heartbeat to this peer.
fn send_heartbeat(
&self,
heartbeat: Heartbeat,
) -> impl Send + Future<Output = Option<Vec<TributaryBlockWithCommit>>>;
}
/// The representation of the P2P network.
pub trait P2p:
Send + Sync + Clone + tributary_sdk::P2p + serai_cosign::RequestNotableCosigns
{
/// The representation of a peer.
type Peer<'a>: Peer<'a>;
/// Fetch the peers for this network.
fn peers(&self, network: ExternalNetworkId) -> impl Send + Future<Output = Vec<Self::Peer<'_>>>;
/// Broadcast a cosign.
fn publish_cosign(&self, cosign: SignedCosign) -> impl Send + Future<Output = ()>;
/// A cancel-safe future for the next heartbeat received over the P2P network.
///
/// Yields the validator set it's for, the latest block hash observed, and a channel to return the
/// descending blocks. This channel MUST NOT and will not have its receiver dropped before a
/// message is sent.
fn heartbeat(
&self,
) -> impl Send + Future<Output = (Heartbeat, oneshot::Sender<Vec<TributaryBlockWithCommit>>)>;
/// A cancel-safe future for the next request for the notable cosigns of a global session.
///
/// Yields the global session the request is for and a channel to return the notable cosigns.
/// This channel MUST NOT and will not have its receiver dropped before a message is sent.
fn notable_cosigns_request(
&self,
) -> impl Send + Future<Output = ([u8; 32], oneshot::Sender<Vec<SignedCosign>>)>;
/// A cancel-safe future for the next message regarding a Tributary.
///
/// Yields the message's Tributary's genesis block hash and the message.
fn tributary_message(&self) -> impl Send + Future<Output = ([u8; 32], Vec<u8>)>;
/// A cancel-safe future for the next cosign received.
fn cosign(&self) -> impl Send + Future<Output = SignedCosign>;
}
fn handle_notable_cosigns_request<D: Db>(
db: &D,
global_session: [u8; 32],
channel: oneshot::Sender<Vec<SignedCosign>>,
) {
let cosigns = Cosigning::<D>::notable_cosigns(db, global_session);
channel.send(cosigns).expect("channel listening for cosign oneshot response was dropped?");
}
fn handle_heartbeat<D: Db, T: TransactionTrait>(
reader: &TributaryReader<D, T>,
mut latest_block_hash: [u8; 32],
channel: oneshot::Sender<Vec<TributaryBlockWithCommit>>,
) {
let mut res_size = 8;
let mut res = vec![];
// The former condition should be implied by the latter, as the size limit was calculated to
// fit at least the minimum amount of blocks
while (res.len() < heartbeat::MIN_BLOCKS_PER_BATCH) || (res_size < heartbeat::BATCH_SIZE_LIMIT) {
let Some(block_after) = reader.block_after(&latest_block_hash) else { break };
// These `break` conditions should only occur under edge cases, such as if we're actively
// deleting this Tributary due to being done with it
let Some(block) = reader.block(&block_after) else { break };
let block = block.serialize();
let Some(commit) = reader.commit(&block_after) else { break };
res_size += 8 + block.len() + 8 + commit.len();
res.push(TributaryBlockWithCommit { block, commit });
latest_block_hash = block_after;
}
channel
.send(res)
.map_err(|_| ())
.expect("channel listening for heartbeat oneshot response was dropped?");
}
/// Run the P2P instance.
///
/// `add_tributary`'s and `retire_tributary`'s senders, along with `send_cosigns`'s receiver, must
/// never be dropped. `retire_tributary` is not required to only be instructed with added
/// Tributaries.
pub async fn run<TD: Db, Tx: TransactionTrait, P: P2p>(
db: impl Db,
p2p: P,
mut add_tributary: mpsc::UnboundedReceiver<(ExternalValidatorSet, Tributary<TD, Tx, P>)>,
mut retire_tributary: mpsc::UnboundedReceiver<ExternalValidatorSet>,
send_cosigns: mpsc::UnboundedSender<SignedCosign>,
) {
let mut readers = HashMap::<ExternalValidatorSet, TributaryReader<TD, Tx>>::new();
let mut tributaries = HashMap::<[u8; 32], mpsc::UnboundedSender<Vec<u8>>>::new();
let mut heartbeat_tasks = HashMap::<ExternalValidatorSet, _>::new();
loop {
tokio::select! {
tributary = add_tributary.recv() => {
let (set, tributary) = tributary.expect("add_tributary send was dropped");
let reader = tributary.reader();
readers.insert(set, reader.clone());
let (heartbeat_task_def, heartbeat_task) = Task::new();
tokio::spawn(
(HeartbeatTask {
set,
tributary: tributary.clone(),
reader: reader.clone(),
p2p: p2p.clone(),
}).continually_run(heartbeat_task_def, vec![])
);
heartbeat_tasks.insert(set, heartbeat_task);
let (tributary_message_send, mut tributary_message_recv) = mpsc::unbounded_channel();
tributaries.insert(tributary.genesis(), tributary_message_send);
// For as long as this sender exists, handle the messages from it on a dedicated task
tokio::spawn(async move {
while let Some(message) = tributary_message_recv.recv().await {
tributary.handle_message(&message).await;
}
});
}
set = retire_tributary.recv() => {
let set = set.expect("retire_tributary send was dropped");
let Some(reader) = readers.remove(&set) else { continue };
tributaries.remove(&reader.genesis()).expect("tributary reader but no tributary");
heartbeat_tasks.remove(&set).expect("tributary but no heartbeat task");
}
(heartbeat, channel) = p2p.heartbeat() => {
if let Some(reader) = readers.get(&heartbeat.set) {
let reader = reader.clone(); // This is a cheap clone
// We spawn this on a task due to the DB reads needed
tokio::spawn(async move {
handle_heartbeat(&reader, heartbeat.latest_block_hash, channel)
});
}
}
(global_session, channel) = p2p.notable_cosigns_request() => {
tokio::spawn({
let db = db.clone();
async move { handle_notable_cosigns_request(&db, global_session, channel) }
});
}
(tributary, message) = p2p.tributary_message() => {
if let Some(tributary) = tributaries.get(&tributary) {
tributary.send(message).expect("tributary message recv was dropped?");
}
}
cosign = p2p.cosign() => {
// We don't call `Cosigning::intake_cosign` here as that can only be called from a single
// location. We also need to intake the cosigns we produce, which means we need to merge
// these streams (signing, network) somehow. That's done with this mpsc channel
send_cosigns.send(cosign).expect("channel receiving cosigns was dropped");
}
}
}
}


@@ -1,333 +0,0 @@
use core::time::Duration;
use std::{
sync::Arc,
collections::{HashSet, HashMap},
};
use tokio::{
sync::{mpsc, Mutex, RwLock},
time::sleep,
};
use borsh::BorshSerialize;
use sp_application_crypto::RuntimePublic;
use serai_client::{
primitives::{ExternalNetworkId, EXTERNAL_NETWORKS},
validator_sets::primitives::{ExternalValidatorSet, Session},
Serai, SeraiError, TemporalSerai,
};
use serai_db::{Get, DbTxn, Db, create_db};
use processor_messages::coordinator::cosign_block_msg;
use crate::{
p2p::{CosignedBlock, GossipMessageKind, P2p},
substrate::LatestCosignedBlock,
};
create_db! {
CosignDb {
ReceivedCosign: (set: ExternalValidatorSet, block: [u8; 32]) -> CosignedBlock,
LatestCosign: (network: ExternalNetworkId) -> CosignedBlock,
DistinctChain: (set: ExternalValidatorSet) -> (),
}
}
pub struct CosignEvaluator<D: Db> {
db: Mutex<D>,
serai: Arc<Serai>,
stakes: RwLock<Option<HashMap<ExternalNetworkId, u64>>>,
latest_cosigns: RwLock<HashMap<ExternalNetworkId, CosignedBlock>>,
}
impl<D: Db> CosignEvaluator<D> {
async fn update_latest_cosign(&self) {
let stakes_lock = self.stakes.read().await;
// If we haven't gotten the stake data yet, return
let Some(stakes) = stakes_lock.as_ref() else { return };
let total_stake = stakes.values().copied().sum::<u64>();
let latest_cosigns = self.latest_cosigns.read().await;
let mut highest_block = 0;
for cosign in latest_cosigns.values() {
let mut networks = HashSet::new();
for (network, sub_cosign) in &*latest_cosigns {
if sub_cosign.block_number >= cosign.block_number {
networks.insert(network);
}
}
let sum_stake =
networks.into_iter().map(|network| stakes.get(network).unwrap_or(&0)).sum::<u64>();
let needed_stake = ((total_stake * 2) / 3) + 1;
if (total_stake == 0) || (sum_stake > needed_stake) {
highest_block = highest_block.max(cosign.block_number);
}
}
let mut db_lock = self.db.lock().await;
let mut txn = db_lock.txn();
if highest_block > LatestCosignedBlock::latest_cosigned_block(&txn) {
log::info!("setting latest cosigned block to {}", highest_block);
LatestCosignedBlock::set(&mut txn, &highest_block);
}
txn.commit();
}
async fn update_stakes(&self) -> Result<(), SeraiError> {
let serai = self.serai.as_of_latest_finalized_block().await?;
let mut stakes = HashMap::new();
for network in EXTERNAL_NETWORKS {
// Use whether this network has published a Batch as a short-circuit for whether they've ever
// set a key
let set_key = serai.in_instructions().last_batch_for_network(network).await?.is_some();
if set_key {
stakes.insert(
network,
serai
.validator_sets()
.total_allocated_stake(network.into())
.await?
.expect("network which published a batch didn't have a stake set")
.0,
);
}
}
// Since we've successfully built stakes, set it
*self.stakes.write().await = Some(stakes);
self.update_latest_cosign().await;
Ok(())
}
// Uses Err to signify a message should be retried
async fn handle_new_cosign(&self, cosign: CosignedBlock) -> Result<(), SeraiError> {
// If we already have this cosign or a newer cosign, return
if let Some(latest) = self.latest_cosigns.read().await.get(&cosign.network) {
if latest.block_number >= cosign.block_number {
return Ok(());
}
}
// If this is an old cosign (older than a day), drop it
let latest_block = self.serai.latest_finalized_block().await?;
if (cosign.block_number + (24 * 60 * 60 / 6)) < latest_block.number() {
log::debug!("received old cosign supposedly signed by {:?}", cosign.network);
return Ok(());
}
let Some(block) = self.serai.finalized_block_by_number(cosign.block_number).await? else {
log::warn!("received cosign with a block number which doesn't map to a block");
return Ok(());
};
async fn set_with_keys_fn(
serai: &TemporalSerai<'_>,
network: ExternalNetworkId,
) -> Result<Option<ExternalValidatorSet>, SeraiError> {
let Some(latest_session) = serai.validator_sets().session(network.into()).await? else {
log::warn!("received cosign from {:?}, which doesn't yet have a session", network);
return Ok(None);
};
let prior_session = Session(latest_session.0.saturating_sub(1));
Ok(Some(
if serai
.validator_sets()
.keys(ExternalValidatorSet { network, session: prior_session })
.await?
.is_some()
{
ExternalValidatorSet { network, session: prior_session }
} else {
ExternalValidatorSet { network, session: latest_session }
},
))
}
// Get the key for this network as of the prior block
// If we have two chains, this value may be different across chains depending on if one chain
// included the set_keys and one didn't
// Because set_keys will force a cosign, it will force detection of distinct blocks
// re: set_keys using keys prior to set_keys (assumed amenable to all)
let serai = self.serai.as_of(block.header.parent_hash.into());
let Some(set_with_keys) = set_with_keys_fn(&serai, cosign.network).await? else {
return Ok(());
};
let Some(keys) = serai.validator_sets().keys(set_with_keys).await? else {
log::warn!("received cosign for a block we didn't have keys for");
return Ok(());
};
if !keys
.0
.verify(&cosign_block_msg(cosign.block_number, cosign.block), &cosign.signature.into())
{
log::warn!("received cosigned block with an invalid signature");
return Ok(());
}
log::info!(
"received cosign for block {} ({}) by {:?}",
block.number(),
hex::encode(cosign.block),
cosign.network
);
// Save this cosign to the DB
{
let mut db = self.db.lock().await;
let mut txn = db.txn();
ReceivedCosign::set(&mut txn, set_with_keys, cosign.block, &cosign);
LatestCosign::set(&mut txn, set_with_keys.network, &(cosign));
txn.commit();
}
if cosign.block != block.hash() {
log::error!(
"received cosign for a distinct block at {}. we have {}. cosign had {}",
cosign.block_number,
hex::encode(block.hash()),
hex::encode(cosign.block)
);
let serai = self.serai.as_of(latest_block.hash());
let mut db = self.db.lock().await;
// Save this set as being on a different chain
let mut txn = db.txn();
DistinctChain::set(&mut txn, set_with_keys, &());
txn.commit();
let mut total_stake = 0;
let mut total_on_distinct_chain = 0;
for network in EXTERNAL_NETWORKS {
// Get the current set for this network
let set_with_keys = {
let mut res;
while {
res = set_with_keys_fn(&serai, network).await;
res.is_err()
} {
log::error!(
"couldn't get the set with keys when checking for a distinct chain: {:?}",
res
);
tokio::time::sleep(core::time::Duration::from_secs(3)).await;
}
res.unwrap()
};
// Get its stake
// Doesn't use the stakes inside self to prevent deadlocks re: multi-lock acquisition
if let Some(set_with_keys) = set_with_keys {
let stake = {
let mut res;
while {
res =
serai.validator_sets().total_allocated_stake(set_with_keys.network.into()).await;
res.is_err()
} {
log::error!(
"couldn't get total allocated stake when checking for a distinct chain: {:?}",
res
);
tokio::time::sleep(core::time::Duration::from_secs(3)).await;
}
res.unwrap()
};
if let Some(stake) = stake {
total_stake += stake.0;
if DistinctChain::get(&*db, set_with_keys).is_some() {
total_on_distinct_chain += stake.0;
}
}
}
}
// See https://github.com/serai-dex/serai/issues/339 for the reasoning on 17%
if (total_stake * 17 / 100) <= total_on_distinct_chain {
panic!("17% of validator sets (by stake) have co-signed a distinct chain");
}
} else {
{
let mut latest_cosigns = self.latest_cosigns.write().await;
latest_cosigns.insert(cosign.network, cosign);
}
self.update_latest_cosign().await;
}
Ok(())
}
#[allow(clippy::new_ret_no_self)]
pub fn new<P: P2p>(db: D, p2p: P, serai: Arc<Serai>) -> mpsc::UnboundedSender<CosignedBlock> {
let mut latest_cosigns = HashMap::new();
for network in EXTERNAL_NETWORKS {
if let Some(cosign) = LatestCosign::get(&db, network) {
latest_cosigns.insert(network, cosign);
}
}
let evaluator = Arc::new(Self {
db: Mutex::new(db),
serai,
stakes: RwLock::new(None),
latest_cosigns: RwLock::new(latest_cosigns),
});
// Spawn a task to update stakes regularly
tokio::spawn({
let evaluator = evaluator.clone();
async move {
loop {
// Run this until it passes
while evaluator.update_stakes().await.is_err() {
log::warn!("couldn't update stakes in the cosign evaluator");
// Try again in 10 seconds
sleep(Duration::from_secs(10)).await;
}
// Run it every 10 minutes as we don't need the exact stake data for this to be valid
sleep(Duration::from_secs(10 * 60)).await;
}
}
});
// Spawn a task to receive cosigns and handle them
let (send, mut recv) = mpsc::unbounded_channel();
tokio::spawn({
let evaluator = evaluator.clone();
async move {
while let Some(msg) = recv.recv().await {
while evaluator.handle_new_cosign(msg).await.is_err() {
// Try again in 10 seconds
sleep(Duration::from_secs(10)).await;
}
}
}
});
// Spawn a task to rebroadcast the most recent cosigns
tokio::spawn({
async move {
loop {
let cosigns = evaluator.latest_cosigns.read().await.values().copied().collect::<Vec<_>>();
for cosign in cosigns {
let mut buf = vec![];
cosign.serialize(&mut buf).unwrap();
P2p::broadcast(&p2p, GossipMessageKind::CosignedBlock, buf).await;
}
sleep(Duration::from_secs(60)).await;
}
}
});
// Return the channel to send cosigns
send
}
}


@@ -1,134 +1,150 @@
-use blake2::{
-  digest::{consts::U32, Digest},
-  Blake2b,
-};
-use scale::Encode;
-use borsh::{BorshSerialize, BorshDeserialize};
-use serai_client::{
-  in_instructions::primitives::{Batch, SignedBatch},
-  primitives::ExternalNetworkId,
-  validator_sets::primitives::{ExternalValidatorSet, Session},
-};
-pub use serai_db::*;
-use ::tributary::ReadWrite;
-use crate::tributary::{TributarySpec, Transaction, scanner::RecognizedIdType};
-create_db!(
-  MainDb {
-    HandledMessageDb: (network: ExternalNetworkId) -> u64,
-    ActiveTributaryDb: () -> Vec<u8>,
-    RetiredTributaryDb: (set: ExternalValidatorSet) -> (),
-    FirstPreprocessDb: (
-      network: ExternalNetworkId,
-      id_type: RecognizedIdType,
-      id: &[u8]
-    ) -> Vec<Vec<u8>>,
-    LastReceivedBatchDb: (network: ExternalNetworkId) -> u32,
-    ExpectedBatchDb: (network: ExternalNetworkId, id: u32) -> [u8; 32],
-    BatchDb: (network: ExternalNetworkId, id: u32) -> SignedBatch,
-    LastVerifiedBatchDb: (network: ExternalNetworkId) -> u32,
-    HandoverBatchDb: (set: ExternalValidatorSet) -> u32,
-    LookupHandoverBatchDb: (network: ExternalNetworkId, batch: u32) -> Session,
-    QueuedBatchesDb: (set: ExternalValidatorSet) -> Vec<u8>
-  }
-);
-impl ActiveTributaryDb {
-  pub fn active_tributaries<G: Get>(getter: &G) -> (Vec<u8>, Vec<TributarySpec>) {
-    let bytes = Self::get(getter).unwrap_or_default();
-    let mut bytes_ref: &[u8] = bytes.as_ref();
-    let mut tributaries = vec![];
-    while !bytes_ref.is_empty() {
-      tributaries.push(TributarySpec::deserialize_reader(&mut bytes_ref).unwrap());
-    }
-    (bytes, tributaries)
-  }
-  pub fn add_participating_in_tributary(txn: &mut impl DbTxn, spec: &TributarySpec) {
-    let (mut existing_bytes, existing) = ActiveTributaryDb::active_tributaries(txn);
-    for tributary in &existing {
-      if tributary == spec {
-        return;
-      }
-    }
-    spec.serialize(&mut existing_bytes).unwrap();
-    ActiveTributaryDb::set(txn, &existing_bytes);
-  }
-  pub fn retire_tributary(txn: &mut impl DbTxn, set: ExternalValidatorSet) {
-    let mut active = Self::active_tributaries(txn).1;
-    for i in 0 .. active.len() {
-      if active[i].set() == set {
-        active.remove(i);
-        break;
-      }
-    }
-    let mut bytes = vec![];
-    for active in active {
-      active.serialize(&mut bytes).unwrap();
-    }
-    Self::set(txn, &bytes);
-    RetiredTributaryDb::set(txn, set, &());
-  }
-}
-impl FirstPreprocessDb {
-  pub fn save_first_preprocess(
-    txn: &mut impl DbTxn,
-    network: ExternalNetworkId,
-    id_type: RecognizedIdType,
-    id: &[u8],
-    preprocess: &Vec<Vec<u8>>,
-  ) {
-    if let Some(existing) = FirstPreprocessDb::get(txn, network, id_type, id) {
-      assert_eq!(&existing, preprocess, "saved a distinct first preprocess");
-      return;
-    }
-    FirstPreprocessDb::set(txn, network, id_type, id, preprocess);
-  }
-}
-impl ExpectedBatchDb {
-  pub fn save_expected_batch(txn: &mut impl DbTxn, batch: &Batch) {
-    LastReceivedBatchDb::set(txn, batch.network, &batch.id);
-    Self::set(
-      txn,
-      batch.network,
-      batch.id,
-      &Blake2b::<U32>::digest(batch.instructions.encode()).into(),
-    );
-  }
-}
-impl HandoverBatchDb {
-  pub fn set_handover_batch(txn: &mut impl DbTxn, set: ExternalValidatorSet, batch: u32) {
-    Self::set(txn, set, &batch);
-    LookupHandoverBatchDb::set(txn, set.network, batch, &set.session);
-  }
-}
-impl QueuedBatchesDb {
-  pub fn queue(txn: &mut impl DbTxn, set: ExternalValidatorSet, batch: &Transaction) {
-    let mut batches = Self::get(txn, set).unwrap_or_default();
-    batch.write(&mut batches).unwrap();
-    Self::set(txn, set, &batches);
-  }
-  pub fn take(txn: &mut impl DbTxn, set: ExternalValidatorSet) -> Vec<Transaction> {
-    let batches_vec = Self::get(txn, set).unwrap_or_default();
-    txn.del(Self::key(set));
-    let mut batches: &[u8] = &batches_vec;
-    let mut res = vec![];
-    while !batches.is_empty() {
-      res.push(Transaction::read(&mut batches).unwrap());
-    }
-    res
-  }
-}
+use std::{path::Path, fs};
+pub(crate) use serai_db::{Get, DbTxn, Db as DbTrait};
+use serai_db::{create_db, db_channel};
+use dkg::Participant;
+use serai_client_serai::abi::primitives::{
+  crypto::KeyPair,
+  network_id::ExternalNetworkId,
+  validator_sets::{Session, ExternalValidatorSet},
+};
+use serai_cosign::SignedCosign;
+use serai_coordinator_substrate::NewSetInformation;
+use serai_coordinator_tributary::Transaction;
+#[cfg(all(feature = "parity-db", not(feature = "rocksdb")))]
+pub(crate) type Db = std::sync::Arc<serai_db::ParityDb>;
+#[cfg(feature = "rocksdb")]
+pub(crate) type Db = serai_db::RocksDB;
+#[allow(unused_variables, unreachable_code)]
+fn db(path: &str) -> Db {
+  {
+    let path: &Path = path.as_ref();
+    // This may error if this path already exists, which we shouldn't propagate/panic on. If this
+    // is a problem (such as we don't have the necessary permissions to write to this path), we
+    // expect the following DB opening to error.
+    let _: Result<_, _> = fs::create_dir_all(path.parent().unwrap());
+  }
+  #[cfg(all(feature = "parity-db", feature = "rocksdb"))]
+  panic!("built with parity-db and rocksdb");
+  #[cfg(all(feature = "parity-db", not(feature = "rocksdb")))]
+  let db = serai_db::new_parity_db(path);
+  #[cfg(feature = "rocksdb")]
+  let db = serai_db::new_rocksdb(path);
+  db
+}
+pub(crate) fn coordinator_db() -> Db {
+  let root_path = serai_env::var("DB_PATH").expect("path to DB wasn't specified");
+  db(&format!("{root_path}/coordinator/db"))
+}
+fn tributary_db_folder(set: ExternalValidatorSet) -> String {
+  let root_path = serai_env::var("DB_PATH").expect("path to DB wasn't specified");
+  let network = match set.network {
+    ExternalNetworkId::Bitcoin => "Bitcoin",
+    ExternalNetworkId::Ethereum => "Ethereum",
+    ExternalNetworkId::Monero => "Monero",
+  };
+  format!("{root_path}/tributary-{network}-{}", set.session.0)
+}
+pub(crate) fn tributary_db(set: ExternalValidatorSet) -> Db {
+  db(&format!("{}/db", tributary_db_folder(set)))
+}
+pub(crate) fn prune_tributary_db(set: ExternalValidatorSet) {
+  log::info!("pruning data directory for tributary {set:?}");
+  let db = tributary_db_folder(set);
+  if fs::exists(&db).expect("couldn't check if tributary DB exists") {
+    fs::remove_dir_all(db).unwrap();
+  }
+}
+create_db! {
+  Coordinator {
+    // The currently active Tributaries
+    ActiveTributaries: () -> Vec<NewSetInformation>,
+    // The latest Tributary to have been retired for a network
+    // Since Tributaries are retired sequentially, this is informative to if any Tributary has been
+    // retired
+    RetiredTributary: (network: ExternalNetworkId) -> Session,
+    // The last handled message from a Processor
+    LastProcessorMessage: (network: ExternalNetworkId) -> u64,
+    // Cosigns we produced and tried to intake yet incurred an error while doing so
+    ErroneousCosigns: () -> Vec<SignedCosign>,
+    // The keys to confirm and set on the Serai network
+    KeysToConfirm: (set: ExternalValidatorSet) -> KeyPair,
+    // The key was set on the Serai network
+    KeySet: (set: ExternalValidatorSet) -> (),
+  }
+}
+db_channel! {
+  Coordinator {
+    // Cosigns we produced
+    SignedCosigns: () -> SignedCosign,
+    // Tributaries to clean up upon reboot
+    TributaryCleanup: () -> ExternalValidatorSet,
+  }
+}
+mod _internal_db {
+  use super::*;
+  db_channel! {
+    Coordinator {
+      // Tributary transactions to publish from the Processor messages
+      TributaryTransactionsFromProcessorMessages: (set: ExternalValidatorSet) -> Transaction,
+      // Tributary transactions to publish from the DKG confirmation task
+      TributaryTransactionsFromDkgConfirmation: (set: ExternalValidatorSet) -> Transaction,
+      // Participants to remove
+      RemoveParticipant: (set: ExternalValidatorSet) -> u16,
+    }
+  }
+}
+pub(crate) struct TributaryTransactionsFromProcessorMessages;
+impl TributaryTransactionsFromProcessorMessages {
+  pub(crate) fn send(txn: &mut impl DbTxn, set: ExternalValidatorSet, tx: &Transaction) {
+    // If this set has yet to be retired, send this transaction
+    if RetiredTributary::get(txn, set.network).map(|session| session.0) < Some(set.session.0) {
+      _internal_db::TributaryTransactionsFromProcessorMessages::send(txn, set, tx);
+    }
+  }
+  pub(crate) fn try_recv(txn: &mut impl DbTxn, set: ExternalValidatorSet) -> Option<Transaction> {
+    _internal_db::TributaryTransactionsFromProcessorMessages::try_recv(txn, set)
+  }
+}
+pub(crate) struct TributaryTransactionsFromDkgConfirmation;
+impl TributaryTransactionsFromDkgConfirmation {
+  pub(crate) fn send(txn: &mut impl DbTxn, set: ExternalValidatorSet, tx: &Transaction) {
+    // If this set has yet to be retired, send this transaction
+    if RetiredTributary::get(txn, set.network).map(|session| session.0) < Some(set.session.0) {
+      _internal_db::TributaryTransactionsFromDkgConfirmation::send(txn, set, tx);
+    }
+  }
+  pub(crate) fn try_recv(txn: &mut impl DbTxn, set: ExternalValidatorSet) -> Option<Transaction> {
+    _internal_db::TributaryTransactionsFromDkgConfirmation::try_recv(txn, set)
+  }
+}
+pub(crate) struct RemoveParticipant;
+impl RemoveParticipant {
+  pub(crate) fn send(txn: &mut impl DbTxn, set: ExternalValidatorSet, participant: Participant) {
+    // If this set has yet to be retired, send this transaction
+    if RetiredTributary::get(txn, set.network).map(|session| session.0) < Some(set.session.0) {
+      _internal_db::RemoveParticipant::send(txn, set, &u16::from(participant));
+    }
+  }
+  pub(crate) fn try_recv(txn: &mut impl DbTxn, set: ExternalValidatorSet) -> Option<Participant> {
+    _internal_db::RemoveParticipant::try_recv(txn, set)
+      .map(|i| Participant::new(i).expect("sent invalid participant index for removal"))
  }
+}
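// An illustrative sketch of the guard used by the `send` functions above (not part of the
// original file): `Option`'s ordering places `None` before any `Some`, so a network with no
// retired session yet lets session 0 (and everything after) send, while retiring session `n`
// silences sessions `n` and older.
#[cfg(test)]
#[test]
fn retired_session_guard() {
  // No session retired: the guard passes for session 0
  assert!(None < Some(0u32));
  // Session 3 retired: session 3 itself is silenced, session 4 still sends
  assert!(!(Some(3u32) < Some(3u32)));
  assert!(Some(3u32) < Some(4u32));
}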


@@ -0,0 +1,441 @@
use core::{ops::Deref, future::Future};
use std::{boxed::Box, collections::HashMap};
use zeroize::Zeroizing;
use rand_core::OsRng;
use ciphersuite::{group::GroupEncoding, *};
use dkg::{Participant, musig};
use frost_schnorrkel::{
frost::{curve::Ristretto, FrostError, sign::*},
Schnorrkel,
};
use serai_db::{DbTxn, Db as DbTrait};
#[rustfmt::skip]
use serai_client_serai::abi::primitives::{validator_sets::ExternalValidatorSet, address::SeraiAddress};
use serai_task::{DoesNotError, ContinuallyRan};
use serai_coordinator_substrate::{NewSetInformation, Keys};
use serai_coordinator_tributary::{Transaction, DkgConfirmationMessages};
use crate::{KeysToConfirm, KeySet, TributaryTransactionsFromDkgConfirmation};
fn schnorrkel() -> Schnorrkel {
Schnorrkel::new(b"substrate") // TODO: Pull the constant for this
}
fn our_i(
set: &NewSetInformation,
key: &Zeroizing<<Ristretto as WrappedGroup>::F>,
data: &HashMap<Participant, Vec<u8>>,
) -> Participant {
let public = SeraiAddress((Ristretto::generator() * key.deref()).to_bytes());
let mut our_i = None;
for participant in data.keys() {
let validator_index = usize::from(u16::from(*participant) - 1);
let (validator, _weight) = set.validators[validator_index];
if validator == public {
our_i = Some(*participant);
}
}
our_i.unwrap()
}
// Take a HashMap of participations with non-contiguous Participants and convert them to a
// contiguous sequence.
//
// The input data is expected to not include our own data, which also won't be in the output data.
//
// Returns the data re-keyed by the contiguous Participants, or the original Participant whose
// data failed the transform.
fn make_contiguous<T>(
our_i: Participant,
mut data: HashMap<Participant, Vec<u8>>,
transform: impl Fn(Vec<u8>) -> std::io::Result<T>,
) -> Result<HashMap<Participant, T>, Participant> {
assert!(!data.contains_key(&our_i));
let mut ordered_participants = data.keys().copied().collect::<Vec<_>>();
ordered_participants.sort_by_key(|participant| u16::from(*participant));
let mut our_i = Some(our_i);
let mut contiguous = HashMap::new();
let mut i = 1;
for participant in ordered_participants {
// If this is the first participant after our own index, increment to account for our index
if let Some(our_i_value) = our_i {
if u16::from(participant) > u16::from(our_i_value) {
i += 1;
our_i = None;
}
}
let contiguous_index = Participant::new(i).unwrap();
let data = match transform(data.remove(&participant).unwrap()) {
Ok(data) => data,
Err(_) => Err(participant)?,
};
contiguous.insert(contiguous_index, data);
i += 1;
}
Ok(contiguous)
}
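// An illustrative example of the mapping above (not part of the original file): with our own
// index 2 and peers at original indices 1 and 4, the peers land at contiguous indices 1 and 3,
// with 2 left as a hole for ourselves.
#[cfg(test)]
#[test]
fn make_contiguous_example() {
  let p = |i: u16| Participant::new(i).unwrap();
  let mut data = HashMap::new();
  data.insert(p(1), vec![1u8]);
  data.insert(p(4), vec![4u8]);
  // The identity transform suffices for this sketch
  let contiguous = make_contiguous(p(2), data, Ok).unwrap();
  assert_eq!(contiguous[&p(1)], vec![1u8]);
  assert_eq!(contiguous[&p(3)], vec![4u8]);
}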
fn handle_frost_error<T>(result: Result<T, FrostError>) -> Result<T, Participant> {
match &result {
Ok(_) => Ok(result.unwrap()),
Err(FrostError::InvalidPreprocess(participant) | FrostError::InvalidShare(participant)) => {
Err(*participant)
}
// All of these should be unreachable
Err(
FrostError::InternalError(_) |
FrostError::InvalidParticipant(_, _) |
FrostError::InvalidSigningSet(_) |
FrostError::InvalidParticipantQuantity(_, _) |
FrostError::DuplicatedParticipant(_) |
FrostError::MissingParticipant(_),
) => {
result.unwrap();
unreachable!("continued execution after unwrapping Result::Err");
}
}
}
#[rustfmt::skip]
enum Signer {
Preprocess { attempt: u32, seed: CachedPreprocess, preprocess: [u8; 64] },
Share {
attempt: u32,
musig_validators: Vec<SeraiAddress>,
share: [u8; 32],
machine: Box<AlgorithmSignatureMachine<Ristretto, Schnorrkel>>,
},
}
/// Performs the DKG Confirmation protocol.
pub(crate) struct ConfirmDkgTask<CD: DbTrait, TD: DbTrait> {
db: CD,
set: NewSetInformation,
tributary_db: TD,
key: Zeroizing<<Ristretto as WrappedGroup>::F>,
signer: Option<Signer>,
}
impl<CD: DbTrait, TD: DbTrait> ConfirmDkgTask<CD, TD> {
pub(crate) fn new(
db: CD,
set: NewSetInformation,
tributary_db: TD,
key: Zeroizing<<Ristretto as WrappedGroup>::F>,
) -> Self {
Self { db, set, tributary_db, key, signer: None }
}
fn slash(db: &mut CD, set: ExternalValidatorSet, validator: SeraiAddress) {
let mut txn = db.txn();
TributaryTransactionsFromDkgConfirmation::send(
&mut txn,
set,
&Transaction::RemoveParticipant { participant: validator, signed: Default::default() },
);
txn.commit();
}
fn preprocess(
db: &mut CD,
set: ExternalValidatorSet,
attempt: u32,
key: Zeroizing<<Ristretto as WrappedGroup>::F>,
signer: &mut Option<Signer>,
) {
// Perform the preprocess
let public_key = Ristretto::generator() * key.deref();
let (machine, preprocess) = AlgorithmMachine::new(
schnorrkel(),
// We use a 1-of-1 Musig here as we don't know who will actually be in this Musig yet
musig(ExternalValidatorSet::musig_context(&set), key, &[public_key]).unwrap(),
)
.preprocess(&mut OsRng);
// We take the preprocess so we can use it in a distinct machine with the actual Musig
// parameters
let seed = machine.cache();
let mut preprocess_bytes = [0u8; 64];
preprocess_bytes.copy_from_slice(&preprocess.serialize());
let preprocess = preprocess_bytes;
let mut txn = db.txn();
// If this attempt has already been preprocessed for, the Tributary will de-duplicate it
// This may mean the Tributary preprocess is distinct from ours, but we check for that later
TributaryTransactionsFromDkgConfirmation::send(
&mut txn,
set,
&Transaction::DkgConfirmationPreprocess { attempt, preprocess, signed: Default::default() },
);
txn.commit();
*signer = Some(Signer::Preprocess { attempt, seed, preprocess });
}
}
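// A note on the flow above (not part of the original file): the preprocess is generated under a
// placeholder 1-of-1 MuSig because the actual signing set isn't known until preprocesses arrive.
// The cached seed is later replayed via `AlgorithmSignMachine::from_cache` with the real MuSig
// keys, and the `assert_eq!` on the replayed preprocess checks it matches what we published.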
impl<CD: DbTrait, TD: DbTrait> ContinuallyRan for ConfirmDkgTask<CD, TD> {
type Error = DoesNotError;
fn run_iteration(&mut self) -> impl Send + Future<Output = Result<bool, Self::Error>> {
async move {
let mut made_progress = false;
// If we were sent a key to set, create the signer for it
if self.signer.is_none() && KeysToConfirm::get(&self.db, self.set.set).is_some() {
// Create and publish the initial preprocess
Self::preprocess(&mut self.db, self.set.set, 0, self.key.clone(), &mut self.signer);
made_progress = true;
}
// If we have keys to confirm, handle all messages from the tributary
if let Some(key_pair) = KeysToConfirm::get(&self.db, self.set.set) {
// Handle all messages from the Tributary
loop {
let mut tributary_txn = self.tributary_db.txn();
let Some(msg) = DkgConfirmationMessages::try_recv(&mut tributary_txn, self.set.set)
else {
break;
};
match msg {
messages::sign::CoordinatorMessage::Reattempt {
id: messages::sign::SignId { attempt, .. },
} => {
// Create and publish the preprocess for the specified attempt
Self::preprocess(
&mut self.db,
self.set.set,
attempt,
self.key.clone(),
&mut self.signer,
);
}
messages::sign::CoordinatorMessage::Preprocesses {
id: messages::sign::SignId { attempt, .. },
mut preprocesses,
} => {
// Confirm the preprocess we're expected to sign with is the one we locally have
// It may be different if we rebooted and made a second preprocess for this attempt
let Some(Signer::Preprocess { attempt: our_attempt, seed, preprocess }) =
self.signer.take()
else {
// If this message is not expected, commit the txn to drop it and move on
// At some point, we'll get a Reattempt and reset
tributary_txn.commit();
break;
};
// Determine the MuSig key signed with
let musig_validators = {
let mut ordered_participants = preprocesses.keys().copied().collect::<Vec<_>>();
ordered_participants.sort_by_key(|participant| u16::from(*participant));
let mut res = vec![];
for participant in ordered_participants {
let (validator, _weight) =
self.set.validators[usize::from(u16::from(participant) - 1)];
res.push(validator);
}
res
};
let musig_public_keys = musig_validators
.iter()
.map(|key| {
Ristretto::read_G(&mut key.0.as_slice())
.expect("Serai validator had invalid public key")
})
.collect::<Vec<_>>();
let keys = musig(
ExternalValidatorSet::musig_context(&self.set.set),
self.key.clone(),
&musig_public_keys,
)
.unwrap();
// Rebuild the machine
let (machine, preprocess_from_cache) =
AlgorithmSignMachine::from_cache(schnorrkel(), keys, seed);
assert_eq!(preprocess.as_slice(), preprocess_from_cache.serialize().as_slice());
// Ensure this is a consistent signing session
let our_i = our_i(&self.set, &self.key, &preprocesses);
let consistent = (attempt == our_attempt) &&
(preprocesses.remove(&our_i).unwrap().as_slice() == preprocess.as_slice());
if !consistent {
tributary_txn.commit();
break;
}
// Reformat the preprocesses into the expected format for Musig
let preprocesses = match make_contiguous(our_i, preprocesses, |preprocess| {
machine.read_preprocess(&mut preprocess.as_slice())
}) {
Ok(preprocesses) => preprocesses,
// This yields the *original participant index*
Err(participant) => {
Self::slash(
&mut self.db,
self.set.set,
self.set.validators[usize::from(u16::from(participant) - 1)].0,
);
tributary_txn.commit();
break;
}
};
// Calculate our share
let (machine, share) = match handle_frost_error(machine.sign(
preprocesses,
&ExternalValidatorSet::set_keys_message(&self.set.set, &key_pair),
)) {
Ok((machine, share)) => (machine, share),
// This yields the *musig participant index*
Err(participant) => {
Self::slash(
&mut self.db,
self.set.set,
musig_validators[usize::from(u16::from(participant) - 1)],
);
tributary_txn.commit();
break;
}
};
// Send our share
let share = <[u8; 32]>::try_from(share.serialize()).unwrap();
let mut txn = self.db.txn();
TributaryTransactionsFromDkgConfirmation::send(
&mut txn,
self.set.set,
&Transaction::DkgConfirmationShare { attempt, share, signed: Default::default() },
);
txn.commit();
self.signer = Some(Signer::Share {
attempt,
musig_validators,
share,
machine: Box::new(machine),
});
}
messages::sign::CoordinatorMessage::Shares {
id: messages::sign::SignId { attempt, .. },
mut shares,
} => {
let Some(Signer::Share { attempt: our_attempt, musig_validators, share, machine }) =
self.signer.take()
else {
tributary_txn.commit();
break;
};
// Ensure this is a consistent signing session
let our_i = our_i(&self.set, &self.key, &shares);
let consistent = (attempt == our_attempt) &&
(shares.remove(&our_i).unwrap().as_slice() == share.as_slice());
if !consistent {
tributary_txn.commit();
break;
}
// Reformat the shares into the expected format for Musig
let shares = match make_contiguous(our_i, shares, |share| {
machine.read_share(&mut share.as_slice())
}) {
Ok(shares) => shares,
// This yields the *original participant index*
Err(participant) => {
Self::slash(
&mut self.db,
self.set.set,
self.set.validators[usize::from(u16::from(participant) - 1)].0,
);
tributary_txn.commit();
break;
}
};
match handle_frost_error(machine.complete(shares)) {
Ok(signature) => {
// Create the bitvec of the participants
let mut signature_participants;
{
use bitvec::prelude::*;
signature_participants = bitvec![u8, Lsb0; 0; 0];
let mut i = 0;
for (validator, _) in &self.set.validators {
if Some(validator) == musig_validators.get(i) {
signature_participants.push(true);
i += 1;
} else {
signature_participants.push(false);
}
}
}
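// Illustrative example (not part of the original file): with validators [A, B, C, D] and
// musig_validators [B, D], the bits pushed are [false, true, false, true], one bit per
// validator in set order, marking who participated in the MuSig signature.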
// This is safe to call multiple times as it'll just change which *valid*
// signature to publish
let mut txn = self.db.txn();
Keys::set(
&mut txn,
self.set.set,
key_pair.clone(),
signature_participants,
signature.into(),
);
txn.commit();
}
// This yields the *musig participant index*
Err(participant) => {
Self::slash(
&mut self.db,
self.set.set,
musig_validators[usize::from(u16::from(participant) - 1)],
);
tributary_txn.commit();
break;
}
}
}
}
// Because we successfully handled this message, note we made progress
made_progress = true;
tributary_txn.commit();
}
}
// Check if the key has been set on Serai
if KeysToConfirm::get(&self.db, self.set.set).is_some() &&
KeySet::get(&self.db, self.set.set).is_some()
{
// Take the keys to confirm so we never instantiate the signer again
let mut txn = self.db.txn();
KeysToConfirm::take(&mut txn, self.set.set);
KeySet::take(&mut txn, self.set.set);
txn.commit();
// Drop our own signer
// The task won't die until the Tributary does, but now it'll never do anything again
self.signer = None;
made_progress = true;
}
Ok(made_progress)
}
}
}

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -1,46 +0,0 @@
use std::sync::Arc;
use serai_client::primitives::ExternalNetworkId;
use processor_messages::{ProcessorMessage, CoordinatorMessage};
use message_queue::{Service, Metadata, client::MessageQueue};
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct Message {
pub id: u64,
pub network: ExternalNetworkId,
pub msg: ProcessorMessage,
}
#[async_trait::async_trait]
pub trait Processors: 'static + Send + Sync + Clone {
async fn send(&self, network: ExternalNetworkId, msg: impl Send + Into<CoordinatorMessage>);
async fn recv(&self, network: ExternalNetworkId) -> Message;
async fn ack(&self, msg: Message);
}
#[async_trait::async_trait]
impl Processors for Arc<MessageQueue> {
async fn send(&self, network: ExternalNetworkId, msg: impl Send + Into<CoordinatorMessage>) {
let msg: CoordinatorMessage = msg.into();
let metadata =
Metadata { from: self.service, to: Service::Processor(network), intent: msg.intent() };
let msg = borsh::to_vec(&msg).unwrap();
self.queue(metadata, msg).await;
}
async fn recv(&self, network: ExternalNetworkId) -> Message {
let msg = self.next(Service::Processor(network)).await;
assert_eq!(msg.from, Service::Processor(network));
let id = msg.id;
// Deserialize it into a ProcessorMessage
let msg: ProcessorMessage =
borsh::from_slice(&msg.msg).expect("message wasn't a borsh-encoded ProcessorMessage");
return Message { id, network, msg };
}
async fn ack(&self, msg: Message) {
MessageQueue::ack(self, Service::Processor(msg.network), msg.id).await
}
}


@@ -0,0 +1,167 @@
use core::future::Future;
use std::sync::Arc;
use zeroize::Zeroizing;
use ciphersuite::*;
use dalek_ff_group::Ristretto;
use tokio::sync::mpsc;
use serai_db::{DbTxn, Db as DbTrait};
use serai_client_serai::abi::primitives::{
network_id::ExternalNetworkId,
validator_sets::{Session, ExternalValidatorSet},
};
use message_queue::{Service, Metadata, client::MessageQueue};
use tributary_sdk::Tributary;
use serai_task::ContinuallyRan;
use serai_coordinator_tributary::Transaction;
use serai_coordinator_p2p::P2p;
use crate::{Db, KeySet};
pub(crate) struct SubstrateTask<P: P2p> {
pub(crate) serai_key: Zeroizing<<Ristretto as WrappedGroup>::F>,
pub(crate) db: Db,
pub(crate) message_queue: Arc<MessageQueue>,
pub(crate) p2p: P,
pub(crate) p2p_add_tributary:
mpsc::UnboundedSender<(ExternalValidatorSet, Tributary<Db, Transaction, P>)>,
pub(crate) p2p_retire_tributary: mpsc::UnboundedSender<ExternalValidatorSet>,
}
impl<P: P2p> ContinuallyRan for SubstrateTask<P> {
type Error = String; // TODO
fn run_iteration(&mut self) -> impl Send + Future<Output = Result<bool, Self::Error>> {
async move {
let mut made_progress = false;
// Handle the Canonical events
for network in ExternalNetworkId::all() {
loop {
let mut txn = self.db.txn();
let Some(msg) = serai_coordinator_substrate::Canonical::try_recv(&mut txn, network)
else {
break;
};
match msg {
messages::substrate::CoordinatorMessage::SetKeys { session, .. } => {
KeySet::set(&mut txn, ExternalValidatorSet { network, session }, &());
}
messages::substrate::CoordinatorMessage::SlashesReported { session } => {
let prior_retired = crate::db::RetiredTributary::get(&txn, network);
let next_to_be_retired =
prior_retired.map(|session| Session(session.0 + 1)).unwrap_or(Session(0));
assert_eq!(session, next_to_be_retired);
crate::db::RetiredTributary::set(&mut txn, network, &session);
self
.p2p_retire_tributary
.send(ExternalValidatorSet { network, session })
.expect("p2p retire_tributary channel dropped?");
}
messages::substrate::CoordinatorMessage::Block { .. } => {}
}
let msg = messages::CoordinatorMessage::from(msg);
let metadata = Metadata {
from: Service::Coordinator,
to: Service::Processor(network),
intent: msg.intent(),
};
let msg = borsh::to_vec(&msg).unwrap();
self.message_queue.queue(metadata, msg).await?;
txn.commit();
made_progress = true;
}
}
// Handle the NewSet events
loop {
let mut txn = self.db.txn();
let Some(new_set) = serai_coordinator_substrate::NewSet::try_recv(&mut txn) else { break };
if let Some(historic_session) = new_set.set.session.0.checked_sub(2) {
// We should have retired this session if we're here
if crate::db::RetiredTributary::get(&txn, new_set.set.network).map(|session| session.0) <
Some(historic_session)
{
/*
If we haven't, it's because we're processing the NewSet event before the retirement
event from the Canonical event stream. This happens if the Canonical event, and
then the NewSet event, is fired while we're already iterating over NewSet events.
We break, dropping the txn, restoring this NewSet to the database, so we'll only
handle it once a future iteration of this loop handles the retirement event.
*/
break;
}
/*
Queue this historical Tributary for deletion.
We explicitly don't queue this upon Tributary retire, instead here, to give time to
investigate retired Tributaries if questions are raised post-retirement. This gives a
week (the duration of the following session) after the Tributary has been retired to
make a backup of the data directory for any investigations.
*/
crate::db::TributaryCleanup::send(
&mut txn,
&ExternalValidatorSet {
network: new_set.set.network,
session: Session(historic_session),
},
);
}
// Save this Tributary as active to the database
{
let mut active_tributaries =
crate::db::ActiveTributaries::get(&txn).unwrap_or_else(|| Vec::with_capacity(1));
active_tributaries.push(new_set.clone());
crate::db::ActiveTributaries::set(&mut txn, &active_tributaries);
}
// Send GenerateKey to the processor
let msg = messages::key_gen::CoordinatorMessage::GenerateKey {
session: new_set.set.session,
threshold: new_set.threshold,
evrf_public_keys: new_set.evrf_public_keys.clone(),
};
let msg = messages::CoordinatorMessage::from(msg);
let metadata = Metadata {
from: Service::Coordinator,
to: Service::Processor(new_set.set.network),
intent: msg.intent(),
};
let msg = borsh::to_vec(&msg).unwrap();
self.message_queue.queue(metadata, msg).await?;
// Commit the transaction for all of this
txn.commit();
// Now spawn the Tributary
// If we reboot after committing the txn, but before this is called, this will be called
// on boot
crate::tributary::spawn_tributary(
self.db.clone(),
self.message_queue.clone(),
self.p2p.clone(),
&self.p2p_add_tributary,
new_set,
self.serai_key.clone(),
)
.await;
made_progress = true;
}
Ok(made_progress)
}
}
}


@@ -1,338 +0,0 @@
/*
If:
A) This block has events and it's been at least X blocks since the last cosign or
B) This block doesn't have events but it's been X blocks since a skipped block which did
have events or
C) This block key gens (which changes who the cosigners are)
cosign this block.
This creates both a minimum and maximum delay of X blocks before a block's cosigning begins,
barring key gens which are exceptional. The minimum delay is there to ensure we don't constantly
spawn new protocols every 6 seconds, overwriting the old ones. The maximum delay is there to
ensure any block needing to be cosigned is cosigned within a reasonable amount of time.
*/
use zeroize::Zeroizing;
use dalek_ff_group::Ristretto;
use ciphersuite::Ciphersuite;
use borsh::{BorshSerialize, BorshDeserialize};
use serai_client::{
primitives::ExternalNetworkId,
validator_sets::primitives::{ExternalValidatorSet, Session},
Serai, SeraiError,
};
use serai_db::*;
use crate::{Db, substrate::in_set, tributary::SeraiBlockNumber};
// 5 minutes, expressed in blocks (300 seconds / 6-second block time = 50 blocks)
// TODO: Pull a constant for block time
const COSIGN_DISTANCE: u64 = 5 * 60 / 6;
#[derive(Clone, Copy, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)]
enum HasEvents {
KeyGen,
Yes,
No,
}
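// A minimal sketch (illustrative only, not part of the original file) of the trigger policy
// described in the comment above, where `skipped` is a prior block with events whose cosign we
// skipped due to proximity:
fn should_trigger_cosign(
block: u64,
has_events: HasEvents,
blocks_since_last_cosign: u64,
skipped: Option<u64>,
) -> bool {
// (C) Key gens are always cosigned, as they change who the cosigners are
(has_events == HasEvents::KeyGen) ||
// (A) This block has events and it's been at least COSIGN_DISTANCE blocks since the last cosign
((has_events == HasEvents::Yes) && (blocks_since_last_cosign >= COSIGN_DISTANCE)) ||
// (B) It's been COSIGN_DISTANCE blocks since a skipped block which did have events
(skipped.map(|skipped| skipped + COSIGN_DISTANCE) == Some(block))
}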
create_db!(
SubstrateCosignDb {
ScanCosignFrom: () -> u64,
IntendedCosign: () -> (u64, Option<u64>),
BlockHasEventsCache: (block: u64) -> HasEvents,
LatestCosignedBlock: () -> u64,
}
);
impl IntendedCosign {
// Sets the block intended to be cosigned, clearing the prior value entirely.
pub fn set_intended_cosign(txn: &mut impl DbTxn, intended: u64) {
Self::set(txn, &(intended, None::<u64>));
}
// Sets the cosign skipped since the last block intended to be cosigned.
pub fn set_skipped_cosign(txn: &mut impl DbTxn, skipped: u64) {
let (intended, prior_skipped) = Self::get(txn).unwrap();
assert!(prior_skipped.is_none());
Self::set(txn, &(intended, Some(skipped)));
}
}
impl LatestCosignedBlock {
pub fn latest_cosigned_block(getter: &impl Get) -> u64 {
Self::get(getter).unwrap_or_default().max(1)
}
}
db_channel! {
SubstrateDbChannels {
CosignTransactions: (network: ExternalNetworkId) -> (Session, u64, [u8; 32]),
}
}
impl CosignTransactions {
// Append a cosign transaction.
pub fn append_cosign(
txn: &mut impl DbTxn,
set: ExternalValidatorSet,
number: u64,
hash: [u8; 32],
) {
CosignTransactions::send(txn, set.network, &(set.session, number, hash))
}
}
async fn block_has_events(
txn: &mut impl DbTxn,
serai: &Serai,
block: u64,
) -> Result<HasEvents, SeraiError> {
let cached = BlockHasEventsCache::get(txn, block);
match cached {
None => {
let serai = serai.as_of(
serai
.finalized_block_by_number(block)
.await?
.expect("couldn't get block which should've been finalized")
.hash(),
);
if !serai.validator_sets().key_gen_events().await?.is_empty() {
return Ok(HasEvents::KeyGen);
}
let has_no_events = serai.coins().burn_with_instruction_events().await?.is_empty() &&
serai.in_instructions().batch_events().await?.is_empty() &&
serai.validator_sets().new_set_events().await?.is_empty() &&
serai.validator_sets().set_retired_events().await?.is_empty();
let has_events = if has_no_events { HasEvents::No } else { HasEvents::Yes };
BlockHasEventsCache::set(txn, block, &has_events);
Ok(has_events)
}
Some(code) => Ok(code),
}
}
async fn potentially_cosign_block(
txn: &mut impl DbTxn,
serai: &Serai,
block: u64,
skipped_block: Option<u64>,
window_end_exclusive: u64,
) -> Result<bool, SeraiError> {
// The following code, which marks this block as cosigned if the prior block is cosigned, expects
// this block to not be zero
// While we could perform this check there, there's no reason not to short-circuit the entire
// function here
if block == 0 {
return Ok(false);
}
let block_has_events = block_has_events(txn, serai, block).await?;
// If this block had no events and immediately follows a cosigned block, mark it as cosigned
if (block_has_events == HasEvents::No) &&
(LatestCosignedBlock::latest_cosigned_block(txn) == (block - 1))
{
log::debug!("automatically co-signing next block ({block}) since it has no events");
LatestCosignedBlock::set(txn, &block);
}
// If we skipped a block, we're supposed to sign it plus the COSIGN_DISTANCE if no other blocks
// trigger a cosigning protocol covering it
// This means there will be the maximum delay allowed from a block needing cosigning occurring
// and a cosign for it triggering
let maximally_latent_cosign_block =
skipped_block.map(|skipped_block| skipped_block + COSIGN_DISTANCE);
// If this block is within the window,
if block < window_end_exclusive {
// and it had a key gen, cosign it
if block_has_events == HasEvents::KeyGen {
IntendedCosign::set_intended_cosign(txn, block);
// Carry skipped if it isn't included by cosigning this block
if let Some(skipped) = skipped_block {
if skipped > block {
IntendedCosign::set_skipped_cosign(txn, block);
}
}
return Ok(true);
}
} else if (Some(block) == maximally_latent_cosign_block) || (block_has_events != HasEvents::No) {
// Since this block was outside the window and had events/was maximally latent, cosign it
IntendedCosign::set_intended_cosign(txn, block);
return Ok(true);
}
Ok(false)
}
/*
Advances the cosign protocol as should be done per the latest block.
A block is considered cosigned if:
A) It was cosigned
B) It's the parent of a cosigned block
C) It immediately follows a cosigned block and has no events requiring cosigning
This only actually performs advancement within a limited bound (generally until it finds a block
which should be cosigned). Accordingly, it is necessary to call multiple times even if
`latest_number` doesn't change.
*/
async fn advance_cosign_protocol_inner(
db: &mut impl Db,
key: &Zeroizing<<Ristretto as Ciphersuite>::F>,
serai: &Serai,
latest_number: u64,
) -> Result<(), SeraiError> {
let mut txn = db.txn();
const INITIAL_INTENDED_COSIGN: u64 = 1;
let (last_intended_to_cosign_block, mut skipped_block) = {
let intended_cosign = IntendedCosign::get(&txn);
// If we haven't previously intended to cosign a block, set the intended cosign to 1
if let Some(intended_cosign) = intended_cosign {
intended_cosign
} else {
IntendedCosign::set_intended_cosign(&mut txn, INITIAL_INTENDED_COSIGN);
IntendedCosign::get(&txn).unwrap()
}
};
// "windows" refers to the window of blocks where even if there's a block which should be
// cosigned, it won't be due to proximity due to the prior cosign
let mut window_end_exclusive = last_intended_to_cosign_block + COSIGN_DISTANCE;
// If we've never triggered a cosign, don't skip any cosigns based on proximity
if last_intended_to_cosign_block == INITIAL_INTENDED_COSIGN {
window_end_exclusive = 1;
}
// The consensus rules for this are `last_intended_to_cosign_block + 1`
let scan_start_block = last_intended_to_cosign_block + 1;
// As a practical optimization, we don't re-scan old blocks since old blocks are independent of
// new state
let scan_start_block = scan_start_block.max(ScanCosignFrom::get(&txn).unwrap_or(1));
// Check all blocks within the window to see if they should be cosigned
// If so, we're skipping them and need to flag them as skipped so that once the window closes, we
// do cosign them
// We only perform this check if we haven't already marked a block as skipped, since the cosign
// caused by the skipped block will cosign all other blocks within this window
if skipped_block.is_none() {
let window_end_inclusive = window_end_exclusive - 1;
for b in scan_start_block ..= window_end_inclusive.min(latest_number) {
if block_has_events(&mut txn, serai, b).await? == HasEvents::Yes {
skipped_block = Some(b);
log::debug!("skipping cosigning {b} due to proximity to prior cosign");
IntendedCosign::set_skipped_cosign(&mut txn, b);
break;
}
}
}
// A block which should be cosigned
let mut to_cosign = None;
// A list of sets which are cosigning, along with a boolean of whether we're in the set
let mut cosigning = vec![];
for block in scan_start_block ..= latest_number {
let actual_block = serai
.finalized_block_by_number(block)
.await?
.expect("couldn't get block which should've been finalized");
// Save the block number for this block, as needed by the cosigner to perform cosigning
SeraiBlockNumber::set(&mut txn, actual_block.hash(), &block);
if potentially_cosign_block(&mut txn, serai, block, skipped_block, window_end_exclusive).await?
{
to_cosign = Some((block, actual_block.hash()));
// Get the keys as of the prior block
// If this block sets new keys, the coordinator won't acknowledge them until we process this
// block
// We won't process this block until it's cosigned
// Using the keys of the prior block ensures this deadlock isn't reached
let serai = serai.as_of(actual_block.header.parent_hash.into());
for network in serai_client::primitives::EXTERNAL_NETWORKS {
// Get the latest session to have set keys
let set_with_keys = {
let Some(latest_session) = serai.validator_sets().session(network.into()).await? else {
continue;
};
let prior_session = Session(latest_session.0.saturating_sub(1));
if serai
.validator_sets()
.keys(ExternalValidatorSet { network, session: prior_session })
.await?
.is_some()
{
ExternalValidatorSet { network, session: prior_session }
} else {
let set = ExternalValidatorSet { network, session: latest_session };
if serai.validator_sets().keys(set).await?.is_none() {
continue;
}
set
}
};
log::debug!("{:?} will be cosigning {block}", set_with_keys.network);
cosigning.push((set_with_keys, in_set(key, &serai, set_with_keys.into()).await?.unwrap()));
}
break;
}
// If this TX is committed, always start future scanning from the next block
ScanCosignFrom::set(&mut txn, &(block + 1));
// Since we're scanning *from* the next block, tidy the cache
BlockHasEventsCache::del(&mut txn, block);
}
if let Some((number, hash)) = to_cosign {
// If this block doesn't have cosigners, yet does have events, automatically mark it as
// cosigned
if cosigning.is_empty() {
log::debug!("{} had no cosigners available, marking as cosigned", number);
LatestCosignedBlock::set(&mut txn, &number);
} else {
for (set, in_set) in cosigning {
if in_set {
log::debug!("cosigning {number} with {:?} {:?}", set.network, set.session);
CosignTransactions::append_cosign(&mut txn, set, number, hash);
}
}
}
}
txn.commit();
Ok(())
}
pub async fn advance_cosign_protocol(
db: &mut impl Db,
key: &Zeroizing<<Ristretto as Ciphersuite>::F>,
serai: &Serai,
latest_number: u64,
) -> Result<(), SeraiError> {
loop {
let scan_from = ScanCosignFrom::get(db).unwrap_or(1);
// Only scan 1000 blocks at a time to prevent a massive txn from forming
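// e.g., with scan_from = 1 and latest_number = 5000, this pass is bounded to block 1001, with
// later passes resuming from wherever ScanCosignFrom was advanced to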
let scan_to = latest_number.min(scan_from + 1000);
advance_cosign_protocol_inner(db, key, serai, scan_to).await?;
// If we didn't limit the scan_to, break
if scan_to == latest_number {
break;
}
}
Ok(())
}


@@ -1,32 +0,0 @@
use serai_client::primitives::ExternalNetworkId;
pub use serai_db::*;
mod inner_db {
use super::*;
create_db!(
SubstrateDb {
NextBlock: () -> u64,
HandledEvent: (block: [u8; 32]) -> u32,
BatchInstructionsHashDb: (network: ExternalNetworkId, id: u32) -> [u8; 32]
}
);
}
pub(crate) use inner_db::{NextBlock, BatchInstructionsHashDb};
pub struct HandledEvent;
impl HandledEvent {
fn next_to_handle_event(getter: &impl Get, block: [u8; 32]) -> u32 {
inner_db::HandledEvent::get(getter, block).map_or(0, |last| last + 1)
}
pub fn is_unhandled(getter: &impl Get, block: [u8; 32], event_id: u32) -> bool {
let next = Self::next_to_handle_event(getter, block);
assert!(next >= event_id);
next == event_id
}
pub fn handle_event(txn: &mut impl DbTxn, block: [u8; 32], index: u32) {
assert!(Self::next_to_handle_event(txn, block) == index);
inner_db::HandledEvent::set(txn, block, &index);
}
}
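// A minimal usage sketch of the above (illustrative only; `db`, `block_hash`, and `event_id` are
// assumed from the caller's context), mirroring how events are incrementally marked handled:
//
// if HandledEvent::is_unhandled(&db, block_hash, event_id) {
//   let mut txn = db.txn();
//   // ... handle the event ...
//   HandledEvent::handle_event(&mut txn, block_hash, event_id);
//   txn.commit();
// }
// event_id += 1;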


@@ -1,547 +0,0 @@
use core::{ops::Deref, time::Duration};
use std::{
sync::Arc,
collections::{HashSet, HashMap},
};
use zeroize::Zeroizing;
use dalek_ff_group::Ristretto;
use ciphersuite::{group::GroupEncoding, Ciphersuite};
use serai_client::{
coins::CoinsEvent,
in_instructions::InInstructionsEvent,
primitives::{BlockHash, ExternalNetworkId},
validator_sets::{
primitives::{ExternalValidatorSet, ValidatorSet},
ValidatorSetsEvent,
},
Block, Serai, SeraiError, TemporalSerai,
};
use serai_db::DbTxn;
use processor_messages::SubstrateContext;
use tokio::{sync::mpsc, time::sleep};
use crate::{
Db,
processors::Processors,
tributary::{TributarySpec, SeraiDkgCompleted},
};
mod db;
pub use db::*;
mod cosign;
pub use cosign::*;
async fn in_set(
key: &Zeroizing<<Ristretto as Ciphersuite>::F>,
serai: &TemporalSerai<'_>,
set: ValidatorSet,
) -> Result<Option<bool>, SeraiError> {
let Some(participants) = serai.validator_sets().participants(set.network).await? else {
return Ok(None);
};
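// Check whether our own public key is among the set's participants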
let key = (Ristretto::generator() * key.deref()).to_bytes();
Ok(Some(participants.iter().any(|(participant, _)| participant.0 == key)))
}
async fn handle_new_set<D: Db>(
txn: &mut D::Transaction<'_>,
key: &Zeroizing<<Ristretto as Ciphersuite>::F>,
new_tributary_spec: &mpsc::UnboundedSender<TributarySpec>,
serai: &Serai,
block: &Block,
set: ExternalValidatorSet,
) -> Result<(), SeraiError> {
if in_set(key, &serai.as_of(block.hash()), set.into())
.await?
.expect("NewSet for set which doesn't exist")
{
log::info!("present in set {:?}", set);
let set_data = {
let serai = serai.as_of(block.hash());
let serai = serai.validator_sets();
let set_participants =
serai.participants(set.network.into()).await?.expect("NewSet for set which doesn't exist");
set_participants.into_iter().map(|(k, w)| (k, u16::try_from(w).unwrap())).collect::<Vec<_>>()
};
let time = if let Ok(time) = block.time() {
time
} else {
assert_eq!(block.number(), 0);
// Use the next block's time
loop {
let Ok(Some(res)) = serai.finalized_block_by_number(1).await else {
sleep(Duration::from_secs(5)).await;
continue;
};
break res.time().unwrap();
}
};
// The block time is in milliseconds yet the Tributary is in seconds
let time = time / 1000;
// Since this block is in the past, and Tendermint doesn't play nice with starting chains after
// their start time (though it does eventually work), delay the start time by 120 seconds
// This is meant to handle ~20 blocks (120 seconds / 6-second block time) of lack of finalization
// for this first block
const SUBSTRATE_TO_TRIBUTARY_TIME_DELAY: u64 = 120;
let time = time + SUBSTRATE_TO_TRIBUTARY_TIME_DELAY;
let spec = TributarySpec::new(block.hash(), time, set, set_data);
log::info!("creating new tributary for {:?}", spec.set());
// Save it to the database now, not on the channel receiver's side, so this is safe against
// reboots
// If this txn finishes, and we reboot, then this'll be reloaded from active Tributaries
// If this txn doesn't finish, this will be re-fired
// If we waited to save to the DB, this txn may be finished, preventing re-firing, yet the
// prior fired event may have not been received yet
crate::ActiveTributaryDb::add_participating_in_tributary(txn, &spec);
new_tributary_spec.send(spec).unwrap();
} else {
log::info!("not present in new set {:?}", set);
}
Ok(())
}
async fn handle_batch_and_burns<Pro: Processors>(
txn: &mut impl DbTxn,
processors: &Pro,
serai: &Serai,
block: &Block,
) -> Result<(), SeraiError> {
// Track which networks had events with a Vec in order to preserve the insertion order
// While that shouldn't be needed, ensuring order never hurts, and may enable design choices
// with regards to Processor <-> Coordinator message passing
let mut networks_with_event = vec![];
let mut network_had_event = |burns: &mut HashMap<_, _>, batches: &mut HashMap<_, _>, network| {
// Don't insert this network multiple times
// A Vec is still used in order to maintain the insertion order
if !networks_with_event.contains(&network) {
networks_with_event.push(network);
burns.insert(network, vec![]);
batches.insert(network, vec![]);
}
};
let mut batch_block = HashMap::new();
let mut batches = HashMap::<ExternalNetworkId, Vec<u32>>::new();
let mut burns = HashMap::new();
let serai = serai.as_of(block.hash());
for batch in serai.in_instructions().batch_events().await? {
if let InInstructionsEvent::Batch { network, id, block: network_block, instructions_hash } =
batch
{
network_had_event(&mut burns, &mut batches, network);
BatchInstructionsHashDb::set(txn, network, id, &instructions_hash);
// Make sure this is the only Batch event for this network in this Block
assert!(batch_block.insert(network, network_block).is_none());
// Add the batch included by this block
batches.get_mut(&network).unwrap().push(id);
} else {
panic!("Batch event wasn't Batch: {batch:?}");
}
}
for burn in serai.coins().burn_with_instruction_events().await? {
if let CoinsEvent::BurnWithInstruction { from: _, instruction } = burn {
let network = instruction.balance.coin.network();
network_had_event(&mut burns, &mut batches, network);
// network_had_event should register an entry in burns
burns.get_mut(&network).unwrap().push(instruction);
} else {
panic!("Burn event wasn't Burn: {burn:?}");
}
}
assert_eq!(HashSet::<&_>::from_iter(networks_with_event.iter()).len(), networks_with_event.len());
for network in networks_with_event {
let network_latest_finalized_block = if let Some(block) = batch_block.remove(&network) {
block
} else {
// If it's had a batch or a burn, it must have had a block acknowledged
serai
.in_instructions()
.latest_block_for_network(network)
.await?
.expect("network had a batch/burn yet never set a latest block")
};
processors
.send(
network,
processor_messages::substrate::CoordinatorMessage::SubstrateBlock {
context: SubstrateContext {
serai_time: block.time().unwrap() / 1000,
network_latest_finalized_block,
},
block: block.number(),
burns: burns.remove(&network).unwrap(),
batches: batches.remove(&network).unwrap(),
},
)
.await;
}
Ok(())
}
// Handle a specific Substrate block, returning an error when it fails to get data
// (not blocking / holding)
#[allow(clippy::too_many_arguments)]
async fn handle_block<D: Db, Pro: Processors>(
db: &mut D,
key: &Zeroizing<<Ristretto as Ciphersuite>::F>,
new_tributary_spec: &mpsc::UnboundedSender<TributarySpec>,
perform_slash_report: &mpsc::UnboundedSender<ExternalValidatorSet>,
tributary_retired: &mpsc::UnboundedSender<ExternalValidatorSet>,
processors: &Pro,
serai: &Serai,
block: Block,
) -> Result<(), SeraiError> {
let hash = block.hash();
// Define an indexed event ID.
let mut event_id = 0;
// If a new validator set was activated, create tributary/inform processor to do a DKG
for new_set in serai.as_of(hash).validator_sets().new_set_events().await? {
// Individually mark each event as handled so on reboot, we minimize duplicates
// Additionally, if the Serai connection also fails 1/100 times, this means a block with 1000
// events will successfully be incrementally handled
// (though the Serai connection should be stable, making this unnecessary)
let ValidatorSetsEvent::NewSet { set } = new_set else {
panic!("NewSet event wasn't NewSet: {new_set:?}");
};
// We only coordinate/process external networks
let Ok(set) = ExternalValidatorSet::try_from(set) else { continue };
if HandledEvent::is_unhandled(db, hash, event_id) {
log::info!("found fresh new set event {:?}", new_set);
let mut txn = db.txn();
handle_new_set::<D>(&mut txn, key, new_tributary_spec, serai, &block, set).await?;
HandledEvent::handle_event(&mut txn, hash, event_id);
txn.commit();
}
event_id += 1;
}
// If a key pair was confirmed, inform the processor
for key_gen in serai.as_of(hash).validator_sets().key_gen_events().await? {
if HandledEvent::is_unhandled(db, hash, event_id) {
log::info!("found fresh key gen event {:?}", key_gen);
let ValidatorSetsEvent::KeyGen { set, key_pair } = key_gen else {
panic!("KeyGen event wasn't KeyGen: {key_gen:?}");
};
let substrate_key = key_pair.0 .0;
processors
.send(
set.network,
processor_messages::substrate::CoordinatorMessage::ConfirmKeyPair {
context: SubstrateContext {
serai_time: block.time().unwrap() / 1000,
network_latest_finalized_block: serai
.as_of(block.hash())
.in_instructions()
.latest_block_for_network(set.network)
.await?
// The processor treats this as a magic value which will cause it to find a network
// block which has a time greater than or equal to the Serai time
.unwrap_or(BlockHash([0; 32])),
},
session: set.session,
key_pair,
},
)
.await;
// TODO: If we were in the set, yet were removed, drop the tributary
let mut txn = db.txn();
SeraiDkgCompleted::set(&mut txn, set, &substrate_key);
HandledEvent::handle_event(&mut txn, hash, event_id);
txn.commit();
}
event_id += 1;
}
for accepted_handover in serai.as_of(hash).validator_sets().accepted_handover_events().await? {
let ValidatorSetsEvent::AcceptedHandover { set } = accepted_handover else {
panic!("AcceptedHandover event wasn't AcceptedHandover: {accepted_handover:?}");
};
let Ok(set) = ExternalValidatorSet::try_from(set) else { continue };
if HandledEvent::is_unhandled(db, hash, event_id) {
log::info!("found fresh accepted handover event {:?}", accepted_handover);
// TODO: This isn't atomic with the event handling
// Send a oneshot receiver so we can await the response?
perform_slash_report.send(set).unwrap();
let mut txn = db.txn();
HandledEvent::handle_event(&mut txn, hash, event_id);
txn.commit();
}
event_id += 1;
}
for retired_set in serai.as_of(hash).validator_sets().set_retired_events().await? {
let ValidatorSetsEvent::SetRetired { set } = retired_set else {
panic!("SetRetired event wasn't SetRetired: {retired_set:?}");
};
let Ok(set) = ExternalValidatorSet::try_from(set) else { continue };
if HandledEvent::is_unhandled(db, hash, event_id) {
log::info!("found fresh set retired event {:?}", retired_set);
let mut txn = db.txn();
crate::ActiveTributaryDb::retire_tributary(&mut txn, set);
tributary_retired.send(set).unwrap();
HandledEvent::handle_event(&mut txn, hash, event_id);
txn.commit();
}
event_id += 1;
}
// Finally, tell the processor of acknowledged blocks/burns
// This uses a single event ID as, unlike the prior events which individually executed code, all
// following events share data collection
if HandledEvent::is_unhandled(db, hash, event_id) {
let mut txn = db.txn();
handle_batch_and_burns(&mut txn, processors, serai, &block).await?;
HandledEvent::handle_event(&mut txn, hash, event_id);
txn.commit();
}
Ok(())
}
#[allow(clippy::too_many_arguments)]
async fn handle_new_blocks<D: Db, Pro: Processors>(
db: &mut D,
key: &Zeroizing<<Ristretto as Ciphersuite>::F>,
new_tributary_spec: &mpsc::UnboundedSender<TributarySpec>,
perform_slash_report: &mpsc::UnboundedSender<ExternalValidatorSet>,
tributary_retired: &mpsc::UnboundedSender<ExternalValidatorSet>,
processors: &Pro,
serai: &Serai,
next_block: &mut u64,
) -> Result<(), SeraiError> {
// Check if there's been a new Substrate block
let latest_number = serai.latest_finalized_block().await?.number();
// Advance the cosigning protocol
advance_cosign_protocol(db, key, serai, latest_number).await?;
// Reduce to the latest cosigned block
let latest_number = latest_number.min(LatestCosignedBlock::latest_cosigned_block(db));
if latest_number < *next_block {
return Ok(());
}
for b in *next_block ..= latest_number {
let block = serai
.finalized_block_by_number(b)
.await?
.expect("couldn't get block before the latest finalized block");
log::info!("handling substrate block {b}");
handle_block(
db,
key,
new_tributary_spec,
perform_slash_report,
tributary_retired,
processors,
serai,
block,
)
.await?;
*next_block += 1;
let mut txn = db.txn();
NextBlock::set(&mut txn, next_block);
txn.commit();
log::info!("handled substrate block {b}");
}
Ok(())
}
pub async fn scan_task<D: Db, Pro: Processors>(
mut db: D,
key: Zeroizing<<Ristretto as Ciphersuite>::F>,
processors: Pro,
serai: Arc<Serai>,
new_tributary_spec: mpsc::UnboundedSender<TributarySpec>,
perform_slash_report: mpsc::UnboundedSender<ExternalValidatorSet>,
tributary_retired: mpsc::UnboundedSender<ExternalValidatorSet>,
) {
log::info!("scanning substrate");
let mut next_substrate_block = NextBlock::get(&db).unwrap_or_default();
/*
let new_substrate_block_notifier = {
let serai = &serai;
move || async move {
loop {
match serai.newly_finalized_block().await {
Ok(sub) => return sub,
Err(e) => {
log::error!("couldn't communicate with serai node: {e}");
sleep(Duration::from_secs(5)).await;
}
}
}
}
};
*/
// TODO: Restore the above subscription-based system
// That would require moving serai-client from HTTP to websockets
let new_substrate_block_notifier = {
let serai = &serai;
move |next_substrate_block| async move {
loop {
match serai.latest_finalized_block().await {
Ok(latest) => {
if latest.header.number >= next_substrate_block {
return latest;
}
sleep(Duration::from_secs(3)).await;
}
Err(e) => {
log::error!("couldn't communicate with serai node: {e}");
sleep(Duration::from_secs(5)).await;
}
}
}
}
};
loop {
// await the next block, yet if our notifier had an error, re-create it
{
let Ok(_) = tokio::time::timeout(
Duration::from_secs(60),
new_substrate_block_notifier(next_substrate_block),
)
.await
else {
// Timed out, which may be because Serai isn't finalizing or may be some issue with the
// notifier
if serai.latest_finalized_block().await.map(|block| block.number()).ok() ==
Some(next_substrate_block.saturating_sub(1))
{
log::info!("serai hasn't finalized a block in the last 60s...");
}
continue;
};
/*
// next_block is an Option<Result>
if next_block.and_then(Result::ok).is_none() {
substrate_block_notifier = new_substrate_block_notifier(next_substrate_block);
continue;
}
*/
}
match handle_new_blocks(
&mut db,
&key,
&new_tributary_spec,
&perform_slash_report,
&tributary_retired,
&processors,
&serai,
&mut next_substrate_block,
)
.await
{
Ok(()) => {}
Err(e) => {
log::error!("couldn't communicate with serai node: {e}");
sleep(Duration::from_secs(5)).await;
}
}
}
}
/// Gets the expected ID for the next Batch.
///
/// Will log an error and apply a slight sleep on error, letting the caller simply retry
/// immediately.
pub(crate) async fn expected_next_batch(
serai: &Serai,
network: ExternalNetworkId,
) -> Result<u32, SeraiError> {
async fn expected_next_batch_inner(
serai: &Serai,
network: ExternalNetworkId,
) -> Result<u32, SeraiError> {
let serai = serai.as_of_latest_finalized_block().await?;
let last = serai.in_instructions().last_batch_for_network(network).await?;
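// The next Batch ID is the successor of the last Batch's ID, or 0 if there's never been one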
Ok(if let Some(last) = last { last + 1 } else { 0 })
}
match expected_next_batch_inner(serai, network).await {
Ok(next) => Ok(next),
Err(e) => {
log::error!("couldn't get the expected next batch from substrate: {e:?}");
sleep(Duration::from_millis(100)).await;
Err(e)
}
}
}
/// Verifies `Batch`s which have already been indexed from Substrate.
///
/// Spins if a distinct `Batch` is detected on-chain.
///
/// This has a slight malleability in that it doesn't verify the publisher of a `Batch` is who was
/// expected. This is deemed fine.
pub(crate) async fn verify_published_batches<D: Db>(
txn: &mut D::Transaction<'_>,
network: ExternalNetworkId,
optimistic_up_to: u32,
) -> Option<u32> {
// TODO: Localize from MainDb to SubstrateDb
let last = crate::LastVerifiedBatchDb::get(txn, network);
for id in last.map_or(0, |last| last + 1) ..= optimistic_up_to {
let Some(on_chain) = BatchInstructionsHashDb::get(txn, network, id) else {
break;
};
let off_chain = crate::ExpectedBatchDb::get(txn, network, id).unwrap();
if on_chain != off_chain {
// Halt operations on this network and spin, as this is a critical fault
loop {
log::error!(
"{}! network: {:?} id: {} off-chain: {} on-chain: {}",
"on-chain batch doesn't match off-chain",
network,
id,
hex::encode(off_chain),
hex::encode(on_chain),
);
sleep(Duration::from_secs(60)).await;
}
}
crate::LastVerifiedBatchDb::set(txn, network, &id);
}
crate::LastVerifiedBatchDb::get(txn, network)
}


@@ -1,125 +0,0 @@
use core::fmt::Debug;
use std::{
sync::Arc,
collections::{VecDeque, HashSet, HashMap},
};
use serai_client::{primitives::ExternalNetworkId, validator_sets::primitives::ExternalValidatorSet};
use processor_messages::CoordinatorMessage;
use async_trait::async_trait;
use tokio::sync::RwLock;
use crate::{
processors::{Message, Processors},
TributaryP2p, ReqResMessageKind, GossipMessageKind, P2pMessageKind, Message as P2pMessage, P2p,
};
pub mod tributary;
#[derive(Clone)]
pub struct MemProcessors(pub Arc<RwLock<HashMap<ExternalNetworkId, VecDeque<CoordinatorMessage>>>>);
impl MemProcessors {
#[allow(clippy::new_without_default)]
pub fn new() -> MemProcessors {
MemProcessors(Arc::new(RwLock::new(HashMap::new())))
}
}
#[async_trait::async_trait]
impl Processors for MemProcessors {
async fn send(&self, network: ExternalNetworkId, msg: impl Send + Into<CoordinatorMessage>) {
let mut processors = self.0.write().await;
let processor = processors.entry(network).or_insert_with(VecDeque::new);
processor.push_back(msg.into());
}
async fn recv(&self, _: ExternalNetworkId) -> Message {
todo!()
}
async fn ack(&self, _: Message) {
todo!()
}
}
#[allow(clippy::type_complexity)]
#[derive(Clone, Debug)]
pub struct LocalP2p(
usize,
pub Arc<RwLock<(HashSet<Vec<u8>>, Vec<VecDeque<(usize, P2pMessageKind, Vec<u8>)>>)>>,
);
impl LocalP2p {
pub fn new(validators: usize) -> Vec<LocalP2p> {
let shared = Arc::new(RwLock::new((HashSet::new(), vec![VecDeque::new(); validators])));
let mut res = vec![];
for i in 0 .. validators {
res.push(LocalP2p(i, shared.clone()));
}
res
}
}
#[async_trait]
impl P2p for LocalP2p {
type Id = usize;
async fn subscribe(&self, _set: ExternalValidatorSet, _genesis: [u8; 32]) {}
async fn unsubscribe(&self, _set: ExternalValidatorSet, _genesis: [u8; 32]) {}
async fn send_raw(&self, to: Self::Id, msg: Vec<u8>) {
let mut msg_ref = msg.as_slice();
let kind = ReqResMessageKind::read(&mut msg_ref).unwrap();
self.1.write().await.1[to].push_back((self.0, P2pMessageKind::ReqRes(kind), msg_ref.to_vec()));
}
async fn broadcast_raw(&self, kind: P2pMessageKind, msg: Vec<u8>) {
// Content-based deduplication
let mut lock = self.1.write().await;
{
let already_sent = &mut lock.0;
if already_sent.contains(&msg) {
return;
}
already_sent.insert(msg.clone());
}
let queues = &mut lock.1;
let kind_len = (match kind {
P2pMessageKind::ReqRes(kind) => kind.serialize(),
P2pMessageKind::Gossip(kind) => kind.serialize(),
})
.len();
let msg = msg[kind_len ..].to_vec();
for (i, msg_queue) in queues.iter_mut().enumerate() {
if i == self.0 {
continue;
}
msg_queue.push_back((self.0, kind, msg.clone()));
}
}
async fn receive(&self) -> P2pMessage<Self> {
// This is a cursed way to implement an async read from a Vec
loop {
if let Some((sender, kind, msg)) = self.1.write().await.1[self.0].pop_front() {
return P2pMessage { sender, kind, msg };
}
tokio::time::sleep(std::time::Duration::from_millis(100)).await;
}
}
}
#[async_trait]
impl TributaryP2p for LocalP2p {
async fn broadcast(&self, genesis: [u8; 32], msg: Vec<u8>) {
<Self as P2p>::broadcast(
self,
P2pMessageKind::Gossip(GossipMessageKind::Tributary(genesis)),
msg,
)
.await
}
}
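// A minimal usage sketch (illustrative only; requires a Tokio runtime):
//
// let peers = LocalP2p::new(3);
// TributaryP2p::broadcast(&peers[0], [0; 32], b"msg".to_vec()).await;
// // Every peer other than the sender receives the broadcast exactly once
// let received = peers[1].receive().await;
// assert_eq!(received.sender, 0);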

Some files were not shown because too many files have changed in this diff