503 Commits

Author SHA1 Message Date
Luke Parker
ee9b9778b5 Patch lazy_static to std::sync::LazyLock
This does remove the no-`std` variant of `lazy_static`, but that was unused,
and its `std` support was not implemented additively (making it poor to work
with).

This primarily tidies our `deny.toml` with one less `git` dependency.
2025-12-11 03:50:29 -05:00
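As an illustration of the migration above (a hedged sketch, not this commit's actual diff), a global previously declared via the `lazy_static` macro maps directly onto `std::sync::LazyLock`, stable since Rust 1.80:

```rust
use std::collections::HashMap;
use std::sync::{LazyLock, Mutex};

// Previously, the equivalent global would have used the macro:
//
// lazy_static::lazy_static! {
//   static ref CACHE: Mutex<HashMap<String, u64>> = Mutex::new(HashMap::new());
// }

// With std::sync::LazyLock, no external crate is required. The closure runs
// once, on first access, just as lazy_static's initializer did.
static CACHE: LazyLock<Mutex<HashMap<String, u64>>> =
  LazyLock::new(|| Mutex::new(HashMap::new()));

fn main() {
  CACHE.lock().unwrap().insert("commits".to_string(), 503);
  assert_eq!(*CACHE.lock().unwrap().get("commits").unwrap(), 503);
}
```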
Luke Parker
5a3cf1f2be Dispatch InInstruction as expected 2025-12-11 03:45:17 -05:00
Luke Parker
2fbe925c4d Expand the stack size CI with macOS, oksh, and osh
Fixes the installation of `gash`.

Attempted `posh` on macOS and `mrsh`, leading to
https://github.com/serai-dex/serai/issues/703 and
https://github.com/serai-dex/serai/issues/704 respectively. Attempted `gosh`
leading to https://github.com/u-root/u-root/issues/3474. Attempted `nsh` but
hit https://github.com/nuta/nsh/issues/49 (though `nsh` appears to no longer be
under development, meaning that's unlikely to be fixed).

A future improvement would be to provide `elf.h` on macOS, enabling the use of
`chelf` and restoring it as the source of truth.
2025-12-10 02:04:16 -05:00
Luke Parker
6aad496d86 find -name > find -iname 2025-12-10 01:56:38 -05:00
Luke Parker
d3464cfcb3 Extend Monero action with support for macOS 2025-12-09 23:07:29 -05:00
Luke Parker
8dbea8452d Intentionally attempt to produce finalizations every single block
While aggressive, the existing value (given the documentation for it) is far
too large to be reasonable here.
2025-12-09 21:59:20 -05:00
Luke Parker
f94b7ca50e Add verification of SignedBatch to the in-instructions pallet 2025-12-09 21:58:18 -05:00
Luke Parker
5e39f9bc1e Add gash as a shell tested with
It's notable as the shell used within Guix.
2025-12-09 20:26:23 -05:00
Luke Parker
c98d757c0f Ensure the signed arithmetic won't overflow, expand shells tested with (#701)
* Normalize naming for the stack size CI file

* Extend stack size CI with `posh` and `lksh`

`posh` is a derivative of `pdksh` explicitly intended to ensure
Debian policy compliance.

`lksh` is a more-POSIX-esque legacy shell included alongside `mksh`.

* Ensure a signed long overflow won't occur

Also fixes `write_bytes` for written bytes whose hexadecimal encoding contains
alphabetical digits.

* Improve sh semantics in stack-size workflow
2025-12-09 04:03:24 -05:00
Luke Parker
6603100c7e Add a CI for increase_default_stack_size.sh
This runs whenever the script is modified, or weekly to ensure the CI doesn't
inadvertently decay (due to using the latest packages for a variety of shells).

This runs with `sh` (presumably `dash`), `ksh`, `bash`, `dash` (explicitly),
`zsh`, `ash` (Busybox), `hush` (Busybox), `mksh`, `yash`, and `brush`. While
none of these guarantee this script is POSIX-compliant, as a fully and
explicitly-only POSIX-compliant environment is not constructed, this does
reasonably test that the script itself is POSIX-compliant. The tools called
have been reviewed for usage per the POSIX standard (although not audited to
that degree).

The script itself is modified with the following changes for compliance with
POSIX:
1) `hexdump` is replaced with `od` (`od` suggested by @PlasmaPower)
2) `printf \xFF` replaced with octal escapes, as `\x` is not part of POSIX
3) `head -c` is replaced with `cut`, as the `-c` option is not standardized
   under POSIX (despite it being present for `tail`). This was identified by
   @PlasmaPower. As we used `head -c-2` to truncate the last two characters of
   a string, we now use `wc -c` for a `strlen` to enable the necessary
   arithmetic to calculate what two bytes in from the end of the string is.

This entire effort can be argued pointless, as we could simply run `monerod` on
Debian. This script is useful, though: the journey down the rabbit hole of
POSIX compliance was fascinating, and the methodology is applicable to other
potential futures (whether running binaries on Alpine or testing other `sh`
scripts for their portability). As part of this effort overall, our CI was
extended with `shellcheck` for all `sh` scripts in-tree, including all of our
existing `sh` scripts. That is an actual, direct benefit past this specific
effort.
2025-12-09 00:57:26 -05:00
Luke Parker
f70fee65b8 Add shellcheck to the CI
Updates our scripts to pass. Achieves POSIX compliance for
`increase_default_stack_size.sh` via replacing `hexdump` with `od` and `tr`.
Replaces the non-POSIX `dd status=none` with the POSIX `dd 2> /dev/null`.
2025-12-08 20:04:23 -05:00
Luke Parker
0849d60f28 Run Bitcoin, Monero nodes on Alpine
While this previously didn't work well, presumably due to stack size
limitations, a shell script is included to raise the default stack size limit.
This should be tried again.
2025-12-08 02:30:34 -05:00
Luke Parker
3a792f9ce5 Update documentation on the serai-runtime build.rs 2025-12-08 02:22:29 -05:00
Luke Parker
50959fa0e3 Update the polkadot-sdk used
Removes `parity-wasm` as a dependency, closing
https://github.com/serai-dex/serai/issues/227 and tidying our `deny.toml`.

This removes the `import-memory` flag from the linker as part of
`parity-wasm`'s usage was to map imports into exports
(5a1128b94b/substrate/client/executor/common/src/runtime_blob/runtime_blob.rs (L91-L142)).
2025-12-08 02:22:25 -05:00
Luke Parker
2fb90ebe55 Extend the crates from the Ethereum ecosystem which we patch to be empty
`ruint` pulls in many versions of many crates. This has it pull in fewer.
2025-12-06 08:27:34 -05:00
Luke Parker
b24adcbd14 Add panic-on-poison to no-std std_shims::sync::Mutex
We already had this behavior on `std`. It was omitted when no-`std` due to
deferring to `spin::Mutex`, which does not track poisoning at all. This
increases parity between the two.

Part of https://github.com/serai-dex/serai/issues/698.
2025-12-06 08:06:38 -05:00
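For context, a minimal sketch of the `std`-side behavior referenced (names and layout illustrative, not `std_shims`' actual code): locking panics if a prior holder panicked, rather than returning a `PoisonError` for every caller to handle. The no-`std` variant described by this commit presumably tracks a comparable poison flag around `spin::Mutex`.

```rust
use std::sync::{Mutex as StdMutex, MutexGuard};

// Illustrative only: a Mutex wrapper which panics on poison, so the std and
// no-std variants surface a prior panic the same way.
pub struct Mutex<T>(StdMutex<T>);

impl<T> Mutex<T> {
  pub const fn new(value: T) -> Self {
    Mutex(StdMutex::new(value))
  }

  pub fn lock(&self) -> MutexGuard<'_, T> {
    // A PoisonError here means a holder of this lock panicked, so its
    // contents may be inconsistent; propagate that as a panic.
    self.0.lock().expect("mutex was poisoned")
  }
}
```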
Luke Parker
b791256648 Remove substrate-wasm-builder
By defining our own build script, we gain complete clarity and control over how
the WASM is built. This also removes the need to patch the upstream, which
allowed pollution of the environment variables from the host.

Notable appreciation is given to
https://github.com/rust-lang/rust/issues/145491 for identifying an issue
encountered here, with the associated PR clarifying the necessary flags for the
linker to fix this.
2025-12-04 23:23:38 -05:00
Luke Parker
36ac9c56a4 Remove workaround for lack of musl-dev now that musl-dev is provided in Rust Alpine images
Additionally, optimizes the build process a bit via leaving only the runtime
(and `busybox`) in the final image, and additionally building the runtime
without `std` (as we solely need the WASM blob from this process).
2025-12-04 11:58:38 -05:00
Luke Parker
57bf4984f8 panic = "abort"
`panic = "unwind"` was originally a requirement of Substrate, notably due to
its [native runtime](https://github.com/paritytech/substrate/issues/10874).
This does not mean all of Serai should use this setting however.

As the native runtime has been removed, we no longer need this for the
Substrate node. With a review of our derivative, a panic guard is only used
when fetching the version from the runtime, causing an error on boot if a
panic occurs. Accordingly, we shouldn't have a need for `panic = "unwind"`
within the node, and the runtime itself should be fine.

The rest of Serai's services already registered bespoke hooks to ensure any
panic caused the process to exit. Those are left as-is, even though they're
now unnecessary.
2025-12-04 11:58:38 -05:00
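A hedged sketch of the bespoke-hook pattern those services use (assuming the common idiom, not Serai's actual code): chain onto the default hook, then exit, so a panic on any thread terminates the whole process, approximating `panic = "abort"`.

```rust
fn install_exit_on_panic_hook() {
  let default_hook = std::panic::take_hook();
  std::panic::set_hook(Box::new(move |info| {
    // Preserve the default output (message, location, backtrace)...
    default_hook(info);
    // ... then ensure the process exits instead of unwinding one thread.
    std::process::exit(1);
  }));
}

fn main() {
  install_exit_on_panic_hook();
  // ... spawn threads and run the service ...
}
```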
Luke Parker
87750407de cargo-deny 0.18.8, remove bip39 git dependency
The former is necessary due to `cargo-deny` misinterpreting select licenses.
The latter is finally possible with the recent 2.2.1 release 🎉
2025-12-04 11:58:28 -05:00
Luke Parker
3ce90c55d9 Define a 512 KiB block size limit 2025-12-02 21:24:05 -05:00
Luke Parker
ff95c58341 Round out the runtime
Ensures the block's size limit is respected.

Defines a policy for weights. While I'm unsure I want to commit to this
forever, I do want to acknowledge it's valid and well-defined.

Cleans up the `serai-runtime` crate a bit with further modules in the `wasm`
folder.
2025-12-02 21:16:34 -05:00
Luke Parker
98044f93b1 Stub the in-instructions pallet 2025-12-02 16:46:10 -05:00
Luke Parker
eb04f873d5 Stub the genesis-liquidity pallet 2025-12-02 16:46:06 -05:00
Luke Parker
af74c318aa Add event emissions to the DEX pallet 2025-12-02 13:31:33 -05:00
Luke Parker
d711d8915f Update docs Ruby/gem versions 2025-12-02 13:20:17 -05:00
Luke Parker
3d549564a8 Misc tweaks in the style of the last commit
Notably removes the `kvdb-rocksdb` patch via updating the Substrate version
used to one which disables the `jemalloc` feature itself.

Simplifies the path of the built WASM file within the Dockerfile for
consumers. This also ensures that if the image is built, the path of the WASM
file is as expected (previously unasserted).
2025-12-02 09:10:44 -05:00
Luke Parker
9a75f92864 Thoroughly update versions and methodology
For hash-pinned dependencies, adds comments documenting the associated
versions.

Adds a pin to `slither-analyzer`, which was previously missing.

Updates to Monero 0.18.4.4.

`mimalloc` now has the correct option set when building for `musl`. A C++
compiler is no longer required in its Docker image.

The runtime's `Dockerfile` now symlinks a `libc.so` already present on the
image instead of creating one itself. It also builds the runtime within the
image to ensure it only happens once. The test to ensure the methodology is
reproducible has been updated to not simply create containers from the image,
but to rebuild the image entirely. This also is more robust and arguably
should have already been done.

The pin to the exact hash of the `patch-polkadot-sdk` repo in every
`Cargo.toml` has been removed. The lockfile already serves that role,
simplifying updating in the future.

The latest Rust nightly is adopted as well (superseding
https://github.com/serai-dex/serai/pull/697).

The `librocksdb-sys` patch is replaced with a `kvdb-rocksdb` patch, removing a
git dependency, thanks to https://github.com/paritytech/parity-common/pull/950.
2025-12-01 18:17:01 -05:00
Luke Parker
30ea9d9a06 Tidy the DEX pallet 2025-11-30 21:42:27 -05:00
Luke Parker
c45c973ca1 Remove musl-dev from runtime/Dockerfile
It wasn't pinned with a hash, but with a version tag. This ensures we are
deterministic to the image (specified by hash), `Cargo.lock`, and source code
alone.

Unfortunately, this was incredibly annoying to do, the exact process uncovering
a SIGSEGV in stable Rust. The extensive documentation details the solution.
Thankfully, it works now.
2025-11-27 03:37:37 -05:00
Luke Parker
6e37ac030d Add patch for alloy-eip2124 to an empty crate
Removes the `crc` dependency which had a unique author associated.
2025-11-26 17:01:03 -05:00
Luke Parker
e7c759c468 Improve substrate-median tests
The use of a dedicated test module ensures the API doesn't hide anything which
needs to be public. There's also now explicit tests for when the median is the
popped value.
2025-11-25 23:46:12 -05:00
Luke Parker
8ec0582237 Add module to calculate medians 2025-11-25 22:39:52 -05:00
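A rough illustration of what such a module provides (a sketch under assumed semantics, not the actual `substrate-median` API); `select_nth_unstable` yields the median in average O(n) without fully sorting:

```rust
// Returns the median of a slice, or None if it's empty. For an even length,
// this takes the lower of the two middle values; the actual module's choice
// there is an assumption.
fn median<T: Ord + Copy>(values: &mut [T]) -> Option<T> {
  if values.is_empty() {
    return None;
  }
  let mid = (values.len() - 1) / 2;
  let (_, median, _) = values.select_nth_unstable(mid);
  Some(*median)
}

fn main() {
  let mut timestamps = [30u64, 10, 50, 20, 40];
  assert_eq!(median(&mut timestamps), Some(30));
}
```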
Luke Parker
8d8e8a7a77 Remove unnecessary MSRVs from patches/ 2025-11-25 17:05:30 -05:00
Luke Parker
028ec3cce0 borsh 1.6.0
Bumps the MSRV for some of our crates, which is fine.
2025-11-25 16:58:19 -05:00
Luke Parker
c49215805f Update Substrate 2025-11-25 00:06:54 -05:00
Luke Parker
2ffdd2a01d Update monero-oxide, Substrate 2025-11-22 11:49:25 -05:00
Luke Parker
e1e6e67d4a Ensure desired pruning behavior is held within the node 2025-11-18 21:46:58 -05:00
Luke Parker
6b19780c7b Remove historical state access from Serai
Resolves https://github.com/serai-dex/serai/issues/694.
2025-11-18 21:27:47 -05:00
Luke Parker
6100c3ca90 Restore patches/dalek-ff-group
Ensures `crypto/dalek-ff-group` is pure.
2025-11-16 19:04:57 -05:00
Luke Parker
fa0ed4b180 Add validator sets RPC functions necessary for the coordinator 2025-11-16 17:38:08 -05:00
Luke Parker
0ea16f9e01 doc_auto_cfg -> doc_cfg 2025-11-16 17:38:08 -05:00
Luke Parker
7a314baa9f Update all of serai-coordinator to compile with the new serai-client-serai 2025-11-16 17:38:03 -05:00
Luke Parker
9891ccade8 Add From<*::Call> for Call to serai-abi 2025-11-16 16:43:06 -05:00
Luke Parker
f1f166c168 Restore publish_transaction RPC to Serai 2025-11-16 16:43:06 -05:00
Luke Parker
df4aee2d59 Update serai-client to solely be an umbrella crate of the dedicated client libraries 2025-11-16 16:43:05 -05:00
Luke Parker
302a43653f Add helper to get TemporalSerai as of the latest finalized block 2025-11-16 16:42:36 -05:00
Luke Parker
d219b77bd0 Add bindings to the events from the coins module to serai-client-serai 2025-11-15 16:13:25 -05:00
Luke Parker
fce26eaee1 Update the block RPCs to return null when missing, not an error
Promotes clarity.
2025-11-15 16:12:39 -05:00
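The distinction being promoted, sketched with hypothetical types (not the node's actual RPC signatures): a missing block is a valid answer, distinct from a failed request.

```rust
struct Block;

#[allow(dead_code)]
enum RpcError {
  ConnectionFailed,
}

// Ok(None) (null over the wire) cleanly means "no such block", while Err is
// reserved for genuine transport/protocol failures.
fn block_by_number(finalized_height: u64, number: u64) -> Result<Option<Block>, RpcError> {
  if number <= finalized_height {
    Ok(Some(Block))
  } else {
    Ok(None)
  }
}
```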
Luke Parker
3cfbd9add7 Update serai-coordinator-cosign to the new serai-client-serai 2025-11-15 16:10:58 -05:00
Luke Parker
609cf06393 Update SetDecided to include the validators
Necessary for light-client protocols to follow along with consensus. Arguably,
anyone handling GRANDPA's consensus could peek at this through the consensus
commit anyways, but we shouldn't defer to that.
2025-11-15 14:59:33 -05:00
Luke Parker
46b1f1b7ec Add test for the integrity of headers 2025-11-14 12:04:21 -05:00
Luke Parker
09113201e7 Fixes to the validator sets RPC 2025-11-14 11:19:02 -05:00
Luke Parker
556d294157 Add pallet-timestamp
Ensures the timestamp is sent, is within expected parameters, and is correct
in relation to `pallet-babe`.
2025-11-14 09:59:32 -05:00
Luke Parker
82ca889ed3 Wrap Proposer so we can add the SeraiPreExecutionDigest (timestamps) 2025-11-14 08:02:54 -05:00
Luke Parker
cde0f753c2 Correct Serai header on genesis
Includes a couple misc fixes for the RPC as well.
2025-11-14 07:22:59 -05:00
Luke Parker
6ff0ef7aa6 Type the errors yielded by serai-node's RPC 2025-11-14 03:52:35 -05:00
Luke Parker
f9e3d1b142 Expand validator sets API with the rest of the events and some getters
We could've added a storage API, and fetched fields that way, except we want
the storage to be opaque. That meant we needed to add the RPC routes to the
node, which also simplifies other people writing RPC code and fetching these
fields. Then the node could've used the storage API, except a lot of the
storage in validator-sets is marked opaque and to only be read via functions,
so extending the runtime made the most sense.
2025-11-14 03:37:06 -05:00
Luke Parker
a793aa18ef Make ethereum-schnorr-contract no-std and no-alloc eligible 2025-11-13 05:48:18 -05:00
Luke Parker
5662beeb8a Patch from parity-bip39 back to bip39
Per https://github.com/michalkucharczyk/rust-bip39/tree/mku-2.0.1-release,
`parity-bip39` was a fork to publish a release of `bip39` with two specific PRs
merged. Not only have those PRs been merged, but `bip39` now accepts
`bitcoin_hashes 0.14` (https://github.com/rust-bitcoin/rust-bip39/pull/76),
making this a great time to reconcile (even though it does technically add a
git dependency until the new release is cut...).
2025-11-13 05:25:41 -05:00
Luke Parker
509bd58f4e Add method to fetch a block's events to the RPC 2025-11-13 05:13:55 -05:00
Luke Parker
367a5769e8 Update deny.toml 2025-11-13 00:37:51 -05:00
Luke Parker
cb6eb6430a Update version of substrate 2025-11-13 00:17:19 -05:00
Luke Parker
4f82e5912c Use Alpine to build the runtime
Smaller, works without issue.
2025-11-12 23:02:22 -05:00
Luke Parker
ac7af40f2e Remove rust-src as a component for WASM
It's unnecessary since `wasm32v1-none`.
2025-11-12 23:00:57 -05:00
Luke Parker
264bdd46ca Update serai-runtime to compile a minimum subset of itself for non-WASM targets
We only really care about it as a WASM blob, given `serai-abi`, so there's no
need to compile it twice when it's expensive to build and we don't care about
the non-WASM artifact at all.
2025-11-12 23:00:00 -05:00
Luke Parker
c52f7634de Patch librocksdb-sys to never enable jemalloc, which conflicts with mimalloc
Allows us to update mimalloc and enable the newly added guard pages.

Conflict identified by @PlasmaPower.
2025-11-11 23:04:05 -05:00
Luke Parker
21eaa5793d Error when not running the dev network if an explicit WASM runtime isn't provided 2025-11-11 23:02:20 -05:00
Luke Parker
c744a80d80 Check the finalized function doesn't claim unfinalized blocks are in fact finalized 2025-11-11 23:02:20 -05:00
Luke Parker
a34f9f6164 Build and run the message queue over Alpine
We previously stopped doing so for stability reasons, but this _should_ be
tried again.
2025-11-11 23:02:20 -05:00
Luke Parker
353683cfd2 revm 33 2025-11-11 23:02:16 -05:00
Luke Parker
d4f77159c4 Rust 1.91.1 due to the regression re: wasm builds 2025-11-11 09:08:30 -05:00
Luke Parker
191bf4bdea Remove std feature from revm
It's unnecessary and bloats the tree decently.
2025-11-10 06:34:33 -05:00
Luke Parker
06a4824aba Move bitcoin-serai to core-json and feature-gate the RPC functionality 2025-11-10 05:31:13 -05:00
Luke Parker
e65a37e639 Update various versions 2025-11-10 04:02:02 -05:00
Luke Parker
4653ef4a61 Use dockertest for the newly added serai-client-serai test 2025-11-07 02:08:02 -05:00
Luke Parker
ce08fad931 Add initial basic tests for serai-client-serai 2025-11-06 20:12:37 -05:00
Luke Parker
1866bb7ae3 Begin work on the new RPC for the new node 2025-11-06 03:08:43 -05:00
Luke Parker
aff2065c31 Polkadot stable2509-1 2025-11-06 00:23:35 -05:00
Luke Parker
7300700108 Update misc versions 2025-11-05 19:11:33 -05:00
Luke Parker
31874ceeae serai-node which compiles and produces/finalizes blocks with --dev 2025-11-05 18:20:23 -05:00
Luke Parker
012b8fddae Get serai-node to compile again 2025-11-05 01:18:21 -05:00
Luke Parker
d2f58232c8 Tweak serai-coordinator-cosign to make it closer to compiling again
Adds `PartialOrd, Ord` derivations to some items in `serai-primitives` so they
may be used as keys within `borsh` maps.
2025-11-04 19:42:23 -05:00
Luke Parker
49794b6a75 Correct when spin::Lazy is exposed as std_shims::sync::LazyLock
It's intended to always be used, even on `std`, when `std::sync::LazyLock` is
not available.
2025-11-04 19:28:26 -05:00
Luke Parker
973287d0a1 Smash serai-client so the processors don't need the entire lib to access their specific code
We previously controlled this with feature flags. It's just better to define
their own crates.
2025-11-04 19:27:53 -05:00
Luke Parker
1b499edfe1 Misc fixes so this compiles 2025-11-04 18:56:56 -05:00
Luke Parker
642848bd24 Bump revm 2025-11-04 13:31:46 -05:00
Luke Parker
f7fb78bdd6 Merge branch 'next' into next-polkadot-sdk 2025-11-04 13:24:00 -05:00
Luke Parker
9c47ef2658 Restore deny exception for kayabaNerve/elliptic-curves, accidentally dropped when merging develop 2025-11-04 13:18:59 -05:00
Luke Parker
e1b6b638c6 Merge branch 'develop' into next 2025-11-04 13:14:38 -05:00
Luke Parker
c24768f922 Fix borks from the latest nightly
The `cargo doc` build started to fail with the rolling of `doc_auto_cfg` into
`doc_cfg`, so now we don't build docs for deps (as we can't reasonably update
`generic-array` at this time).

`home` has been patched as we are able to, not as a direct requirement of this
PR.
2025-11-04 13:10:11 -05:00
Luke Parker
65613750e1 Merge branch 'next' into next-polkadot-sdk 2025-11-04 12:06:13 -05:00
Luke Parker
87ee879dea doc_auto_cfg -> doc_cfg 2025-11-04 10:20:17 -05:00
Luke Parker
b5603560e8 Merge branch 'develop' into next 2025-11-04 10:19:38 -05:00
Luke Parker
5818f1a41c Update nightly version 2025-11-04 10:05:08 -05:00
Luke Parker
1b781b4b57 Fix CI 2025-10-07 04:39:32 -04:00
Luke Parker
94faf098b6 Update nightly version 2025-10-05 18:44:04 -04:00
Luke Parker
03e45f73cd Merge branch 'develop' into next 2025-10-05 18:43:53 -04:00
Luke Parker
63f7e220c0 Update macOS labels in CI due to deprecation of macos-13 2025-10-05 10:59:40 -04:00
Luke Parker
7d49366373 Move develop to patch-polkadot-sdk (#678)
* Update `build-dependencies` CI action

* Update `develop` to `patch-polkadot-sdk`

Allows us to finally remove the old `serai-dex/substrate` repository _and_
should have CI pass without issue on `develop` again.

The changes made here should be trivial and maintain all prior
behavior/functionality. The most notable are to `chain_spec.rs`, in order to
still use a SCALE-encoded `GenesisConfig` (avoiding `serde_json`).

* CI fixes

* Add `/usr/local/opt/llvm/lib` to paths on macOS hosts

* Attempt to use `LD_LIBRARY_PATH` in macOS GitHub CI

* Use `libp2p 0.56` in `serai-node`

* Correct Windows build dependencies

* Correct `llvm/lib` path on macOS

* Correct how macOS 13 and 14 have different homebrew paths

* Use `sw_vers` instead of `uname` on macOS

Yields the macOS version instead of the kernel's version.

* Replace hard-coded path with the intended env variable to fix macOS 13

* Add `libclang-dev` as dependency to the Debian Dockerfile

* Set the `CODE` storage slot

* Update to a version of substrate without `wasmtimer`

Turns out `wasmtimer` is WASM only. This should restore the node's functioning
on non-WASM environments.

* Restore `clang` as a dependency due to the Debian Dockerfile as we require a C++ compiler

* Move from Debian bookworm to trixie

* Restore `chain_getBlockBin` to the RPC

* Always generate a new key for the P2P network

* Mention every account on-chain before they publish a transaction

`CheckNonce` required accounts to have a provider in order to even have their
nonce considered. This shims that by claiming every account has a provider at
the start of a block, if it signs a transaction.

The actual execution could presumably diverge between block building (which
sets the provider before each transaction) and execution (which sets the
providers at the start of the block). It doesn't diverge in our current
configuration and it won't be propagated to `next` (which doesn't use
`CheckNonce`).

Also uses explicit indexes for the `serai_abi::{Call, Event}` `enum`s.

* Adopt `patch-polkadot-sdk` with fixed peering

* Manually insert the authority discovery key into the keystore

I did try pulling in `pallet-authority-discovery` for this, updating
`SessionKeys`, but that was insufficient for whatever reason.

* Update to latest `substrate-wasm-builder`

* Fix timeline for incrementing providers

e1671dd71b incremented the providers for every
single transaction's sender before execution, noting the solution was fragile
but it worked for us at this time. It did not work for us at this time.

The new solution replaces `inc_providers` with direct access to the `Account`
`StorageMap` to increment the providers, achieving the desired goal, _without_
emitting an event (which is ordered, and the disparate order between building
and execution was causing mismatches of the state root).

This solution is also fragile and may also be insufficient. None of this code
exists anymore on `next` however. It just has to work sufficiently for now.

* clippy
2025-10-05 10:58:08 -04:00
Luke Parker
56f6ba2dac Merge branch 'develop' into next-polkadot-sdk 2025-10-05 06:04:17 -04:00
Luke Parker
55ed33d2d1 Update to a version of Substrate which no longer cites our fork of substrate-bip39 2025-09-30 19:30:40 -04:00
Luke Parker
138a0e9b40 Resolve https://github.com/serai-dex/serai/issues/680 2025-09-30 01:05:17 -04:00
Luke Parker
4fc7263ac3 Make simple_request::Client generic to the executor
Part of https://github.com/serai-dex/serai/issues/682.

We don't remove the use of `tokio::sync::Mutex` now as `hyper` pulls in
`tokio::sync` anyways, so there's no point in replacing it. This doesn't yet
solve TLS for non-`tokio` `Client`s.
2025-09-30 01:05:12 -04:00
Luke Parker
f27fd59fa6 Update documentation within modular-frost
Resolves https://github.com/serai-dex/serai/issues/675.
2025-09-30 00:27:29 -04:00
Luke Parker
08f6af8bb9 Remove borsh from dkg
It pulls in a lot of bespoke dependencies for little utility directly present.

Moves the necessary code into the processor.
2025-09-27 02:07:18 -04:00
Luke Parker
3512b3832d Don't actually define a pallet for pallet-session
We need its `Config`, not it as a pallet. As it has no storage, no calls, no
events, this is fine.
2025-09-26 23:08:38 -04:00
Luke Parker
1164f92ea1 Update usage of now-associated const in processor/key-gen 2025-09-26 22:48:52 -04:00
Luke Parker
0a3ead0e19 Add patches to remove the unused optional dependencies tracked in tree
Also performs the usual `cargo update`.
2025-09-26 22:47:47 -04:00
Luke Parker
437f0e9a93 Remove serdect by removing the unnecessary "alloc" feature from crypto-bigint
This only works for the legacy `crypto-bigint` and downstream consumers who
don't have the modern `serdect` pulled in for independent reasons.
2025-09-26 20:58:45 -04:00
Luke Parker
cc5d38f1ce dkg-evrf Security Proofs (#681)
* Add audit statement for `dkg-evrf`

This doesn't cover the implementation, solely the academia and background.

Also moves the existing audit of the `crypto` folder for organizational
reasons.

* Add files via upload
2025-09-26 11:20:48 -04:00
Luke Parker
0ce025e0c2 Update build-dependencies CI action 2025-09-21 15:40:58 -04:00
Luke Parker
ea66cd0d1a Update build-dependencies CI action 2025-09-21 15:40:15 -04:00
Luke Parker
8b32fba458 Minor cargo update 2025-09-21 15:40:05 -04:00
Luke Parker
e63acf3f67 Restore a runtime which compiles
Adds BABE, GRANDPA, to the runtime definition and a few stubs for not yet
implemented interfaces.
2025-09-21 13:16:43 -04:00
Luke Parker
d373d2a4c9 Restore AllowMint to serai-validator-sets-pallet and reorganize TODOs 2025-09-20 04:45:28 -04:00
Luke Parker
cbf998ff30 Restore report_slashes
This does not yet handle the `SlashReport`. It solely handles the routing for
it.
2025-09-20 04:16:01 -04:00
Luke Parker
ef07253a27 Restore the event to serai-validator-sets-pallet 2025-09-20 03:42:24 -04:00
Luke Parker
ffae6753ec Restore the set_keys call 2025-09-20 03:04:26 -04:00
Luke Parker
a04215bc13 Remove commented-out slashing code from serai-validator-sets-pallet
Deferred to https://github.com/serai-dex/serai/issues/657.
2025-09-20 03:04:19 -04:00
Luke Parker
28aea8a442 Incorporate a check that a validator won't prevent ever not having a single point of failure 2025-09-20 01:58:39 -04:00
Luke Parker
7b46477ca0 Add explicit hook for deciding whether to include the genesis validators 2025-09-20 01:57:55 -04:00
Luke Parker
e62b62ddfb Restore usage of pallet-grandpa to serai-validator-sets-pallet 2025-09-20 01:36:11 -04:00
Luke Parker
a2d8d0fd13 Restore integration with pallet-babe to serai-validator-sets-pallet 2025-09-20 01:23:02 -04:00
Luke Parker
b2b36b17c4 Restore GenesisConfig to the validator sets pallet 2025-09-20 00:06:19 -04:00
Luke Parker
9de8394efa Emit events within the signals pallet 2025-09-19 22:44:29 -04:00
Luke Parker
3cb9432daa Have the coins pallet emit events via serai_core_pallet
`serai_core_pallet` solely defines an accumulator for the events. We use the
traditional `frame_system::Events` to store them for now and enable retrieval.
2025-09-19 22:18:55 -04:00
Luke Parker
3f5150b3fa Properly define the core pallet instead of placing it within the runtime 2025-09-19 19:05:47 -04:00
Luke Parker
d74b00b9e4 Update monero-oxide to the branch with the new RPC
See https://github.com/monero-oxide/monero-oxide/pull/66.

Allows us to remove the shim `simple-request 0.1` we had to define as we now
have `simple-request 0.2` in tree.
2025-09-18 19:09:22 -04:00
Luke Parker
224cf4ea21 Update monero-oxide to the branch with the new RPC
See https://github.com/monero-oxide/monero-oxide/pull/66.

Allows us to remove the shim `simple-request 0.1` we had to define as we now
have `simple-request 0.2` in tree.
2025-09-18 19:00:10 -04:00
Luke Parker
3955f92cc2 Merge branch 'next' into next-polkadot-sdk 2025-09-18 18:19:14 -04:00
Luke Parker
a9b1e5293c Support webpki-roots as a fallback in simple-request 2025-09-18 18:15:24 -04:00
Luke Parker
80009ab67f Tidy unused import 2025-09-18 17:49:37 -04:00
Luke Parker
df9fda2971 Fixes from errors in cherry-picked commits 2025-09-18 17:49:32 -04:00
Luke Parker
ca8afb83a1 simple-request 0.2.0 2025-09-18 17:41:31 -04:00
Luke Parker
18a9cf2535 Have simple-request return an error upon failing to find the system's root certificates 2025-09-18 17:41:31 -04:00
Luke Parker
10c126ad92 Misc updates 2025-09-18 17:41:25 -04:00
Luke Parker
19305aebc9 Finally make modular-frost work with alloc alone
Carries the update to `frost-schnorrkel` and `bitcoin-serai`.
2025-09-18 17:06:57 -04:00
Luke Parker
be68e27551 Tweak multiexp to compile on core
On `core`, it'll use a serial implementation of no benefit, other than the
fact that when `alloc` _is_ enabled, it'll use the multi-scalar multiplication
algorithms.

`schnorr-signatures` was previously tweaked to include a shim for
`SchnorrSignature::verify` which didn't use `multiexp_vartime`, per this same
premise. Now, instead of callers writing these shims, it's within `multiexp`.
2025-09-18 17:06:42 -04:00
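A toy illustration of the fallback described (integers stand in for scalars and group elements; the real crate is generic over `group`'s traits):

```rust
// Without alloc: evaluate the linear combination serially. No batching and no
// benefit over naive evaluation, but nothing to allocate either.
fn multiexp_serial(pairs: &[(i64, i64)]) -> i64 {
  pairs.iter().map(|(scalar, point)| scalar * point).sum()
}

fn main() {
  // 2*3 + 4*5 = 26
  assert_eq!(multiexp_serial(&[(2, 3), (4, 5)]), 26);
}
```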
Luke Parker
d6d96fe8ff Correct std-shims feature flagging 2025-09-18 17:06:31 -04:00
Luke Parker
95909d83a4 Expose std_shims::io on core
The `io::Write` trait is somewhat worthless, being implemented for nothing, yet
`Read` remains fully functional. This also allows using its polyfills _without_
requiring `alloc`.

Opportunity taken to make `schnorr-signatures` not require `alloc`.

This will require a version bump before being published due to newly requiring
the `alloc` feature be specified to maintain pre-existing behavior.

Enables resolving https://github.com/monero-oxide/monero-oxide/issues/48.
2025-09-18 17:06:05 -04:00
Luke Parker
3bd48974f3 Add missing alloc feature to multiexp's use of zeroize
Fixes building `multiexp` without default features, without separately
specifying `zeroize` and adding the `alloc` feature.
2025-09-18 17:05:19 -04:00
Luke Parker
29093715e3 Add impl<R: Read> Read for &mut R to std_shims
Increases parity with `std::io`.
2025-09-18 17:05:07 -04:00
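The shape of such a blanket impl, sketched over a simplified trait (`std_shims`' actual `Read` mirrors `std::io::Read`; the error type is reduced here for brevity):

```rust
// A simplified Read trait standing in for the std_shims polyfill.
trait Read {
  fn read(&mut self, buf: &mut [u8]) -> Result<usize, ()>;
}

// The blanket impl: any &mut R reads by delegating to R, exactly as std::io
// provides for its own Read. Generic code can take `impl Read` while callers
// keep ownership and pass a mutable reference.
impl<R: Read> Read for &mut R {
  fn read(&mut self, buf: &mut [u8]) -> Result<usize, ()> {
    (**self).read(buf)
  }
}
```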
Luke Parker
87b4dfc8f3 Expand std_shims::prelude to better match std::prelude 2025-09-18 17:04:54 -04:00
Luke Parker
4db78b1787 Add the ability to bound the response's size limit to simple-request 2025-09-18 17:04:41 -04:00
Luke Parker
02a5f15535 Make the MSRV lint more robust
The prior version would fail if the last entry in the final array was not
originally the last entry.
2025-09-18 17:04:10 -04:00
Luke Parker
a1ef18a039 Have simple-request return an error upon failing to find the system's root certificates 2025-09-18 17:03:16 -04:00
Luke Parker
bec806230a Misc updates 2025-09-18 16:25:33 -04:00
Luke Parker
8bafeab5b3 Tidy serai-signals-pallet
Adds `serai-validator-sets-pallet` and `serai-signals-pallet` to the runtime.
2025-09-16 08:45:02 -04:00
Luke Parker
3722df7326 Introduce KeyShares struct to represent the number of key shares
Improvements, bug fixes associated.
2025-09-16 08:45:02 -04:00
Luke Parker
ddb8e1398e Finally make modular-frost work with alloc alone
Carries the update to `frost-schnorrkel` and `bitcoin-serai`.
2025-09-16 08:45:02 -04:00
Luke Parker
2be69b23b1 Tweak multiexp to compile on core
On `core`, it'll use a serial implementation of no benefit, other than the
fact that when `alloc` _is_ enabled, it'll use the multi-scalar multiplication
algorithms.

`schnorr-signatures` was previously tweaked to include a shim for
`SchnorrSignature::verify` which didn't use `multiexp_vartime`, per this same
premise. Now, instead of callers writing these shims, it's within `multiexp`.
2025-09-16 08:45:02 -04:00
Luke Parker
a82ccadbb0 Correct std-shims feature flagging 2025-09-16 08:45:02 -04:00
Luke Parker
1ff2934927 cargo update 2025-09-16 08:44:54 -04:00
Luke Parker
cd4ffa862f Remove coins, validator-sets use of Substrate's event system
We've defined our own.
2025-09-15 21:32:20 -04:00
Luke Parker
c0a4d85ae6 Restore claim_deallocation call to validator-sets pallet 2025-09-15 21:32:01 -04:00
Luke Parker
55e845fe12 Expose std_shims::io on core
The `io::Write` trait is somewhat worthless, being implemented for nothing, yet
`Read` remains fully functional. This also allows using its polyfills _without_
requiring `alloc`.

Opportunity taken to make `schnorr-signatures` not require `alloc`.

This will require a version bump before being published due to newly requiring
the `alloc` feature be specified to maintain pre-existing behavior.

Enables resolving https://github.com/monero-oxide/monero-oxide/issues/48.
2025-09-15 21:24:10 -04:00
Luke Parker
5ea087d177 Add missing alloc feature to multiexp's use of zeroize
Fixes building `multiexp` without default features, without separately
specifying `zeroize` and adding the `alloc` feature.
2025-09-14 08:55:40 -04:00
Luke Parker
dd7dc0c1dc Add impl<R: Read> Read for &mut R to std_shims
Increases parity with `std::io`.
2025-09-12 18:26:27 -04:00
Luke Parker
c83fbb3e44 Expand std_shims::prelude to better match std::prelude 2025-09-12 18:24:56 -04:00
Luke Parker
befbbbfb84 Add the ability to bound the response's size limit to simple-request 2025-09-11 17:24:47 -04:00
Luke Parker
d0f497dc68 Latest patch-polkadot-sdk 2025-09-10 10:02:24 -04:00
Luke Parker
1b755a5d48 patch-polkadot-sdk enabling libp2p 0.56 2025-09-06 17:41:49 -04:00
Luke Parker
e5efcd56ba Make the MSRV lint more robust
The prior version would fail if the last entry in the final array was not
originally the last entry.
2025-09-06 14:43:21 -04:00
Luke Parker
5d60b3c2ae Update parity-db in serai-db
This synchronizes with an update to `patch-polkadot-sdk`.
2025-09-06 14:28:42 -04:00
Luke Parker
ae923b24ff Update `patch-polkadot-sdk`
Allows using `libp2p 0.55`.
2025-09-06 14:04:55 -04:00
Luke Parker
d304cd97e1 Merge branch 'next' into next-polkadot-sdk 2025-09-06 04:26:10 -04:00
Luke Parker
2b56dcdf3f Update patch-polkadot-sdk for bug fixes, removal of is-terminal
Adds a deny entry for `is-terminal` to stop it from secretly reappearing.

Restores the `is-terminal` patch for `is_terminal_polyfill` to have one less
external dependency.
2025-09-06 04:25:21 -04:00
Luke Parker
865e351f96 Bitcoin 29.1
Benefits from `v2transport`, `mempoolfullrbf`, and potentially TRUC.
2025-09-06 04:09:39 -04:00
Luke Parker
ea275df26c Re-export curve25519_dalek::RistrettoPoint for dalek_ff_group::RistrettoPoint
Sacrifices a `Hash` implementation (inefficient and already shouldn't be used)
we appear to have only used in two files (which have been patched).
2025-09-05 17:40:44 -04:00
Luke Parker
90804c4c30 Update deny.toml 2025-09-05 14:08:04 -04:00
Luke Parker
46caca2f51 Update patch-polkadot-sdk to remove scale_info 2025-09-05 14:07:52 -04:00
Luke Parker
2077e485bb Add borsh impls for SignedEmbeddedEllipticCurveKeys 2025-09-05 07:21:07 -04:00
Luke Parker
28dbef8a1c Update to the latest patch-polkadot-sdk
Removes several dependencies.
2025-09-05 06:57:30 -04:00
Luke Parker
2216ade8c4 Tweak how prime-field normalizes to the even square root 2025-09-04 20:48:15 -04:00
Luke Parker
3541197aa5 Merge branch 'next' into next-polkadot-sdk 2025-09-03 16:44:26 -04:00
Luke Parker
5265cc69de hex-literal 1 2025-09-03 13:56:48 -04:00
Luke Parker
a141deaf36 Smash the singular Ciphersuite trait into multiple
This helps identify where the various functionalities are used, or rather, not
used. The `Ciphersuite` trait present in `patches/ciphersuite`, facilitating
the entire FCMP++ tree, only requires the markers _and_ canonical point
decoding. I've opened a PR to upstream such a trait into `group`
(https://github.com/zkcrypto/group/pull/68).

`WrappedGroup` is still justified for as long as `Group::generator` exists.
Moving `::generator()` to its own trait, on an independent structure (upstream)
would be massively appreciated. @tarcieri also wanted to update from
`fn generator()` to `const GENERATOR`, which would encourage further discussion
on https://github.com/zkcrypto/group/issues/32 and
https://github.com/zkcrypto/group/issues/45, which have been stagnant.

The `Id` trait is occasionally used yet really should be first off the chopping
block.

Finally, `WithPreferredHash` is only actually used around a third of the time,
which more than justifies it being a separate trait.

---

Updates `dalek_ff_group::Scalar` to directly re-export
`curve25519_dalek::Scalar`, as doing so is without issue.
`dalek_ff_group::RistrettoPoint` also could be replaced with an export of
`curve25519_dalek::RistrettoPoint`, yet the coordinator relies on how we
implemented `Hash` on it for the hell of it, so it isn't worth it at this time.
`dalek_ff_group::EdwardsPoint` can't be replaced with a re-export of
`curve25519_dalek::SubgroupPoint` as it doesn't implement the `zeroize`,
`subtle` traits within a released, non-yanked version. Relevant to
https://github.com/serai-dex/serai/issues/201 and
https://github.com/dalek-cryptography/curve25519-dalek/issues/811#issuecomment-3247732746.

Also updates the `Ristretto` ciphersuite to prefer `Blake2b-512` over
`SHA2-512`. In order to maintain compliance with FROST's IETF standard,
`modular-frost` defines its own ciphersuite for Ristretto which still uses
`SHA2-512`.
2025-09-03 13:50:20 -04:00
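A hedged sketch of the kind of decomposition described (only the names `WrappedGroup`, `Id`, and `WithPreferredHash` come from the commit; the trait contents are assumptions):

```rust
// The minimal surface the FCMP++ tree is said to require: markers plus
// canonical point decoding.
trait CanonicalDecode: Sized {
  fn decode_canonical(bytes: &[u8]) -> Option<Self>;
}

// Generator access, kept separate so it can be dropped if upstream `group`
// ever moves to something like `const GENERATOR`.
trait WrappedGroup {
  fn generator() -> Self;
}

// An identifier trait, flagged as first off the chopping block.
trait Id {
  const ID: &'static str;
}

// Only needed by roughly a third of consumers, justifying its own trait.
trait WithPreferredHash {
  type PreferredHash;
}
```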
Luke Parker
215e41fdb6 Remove deprecated APIs from dalek-ff-group
For backwards compatibility, we now use as a patch (as prior done with
`ciphersuite`).

Removes `crypto-bigint 0.5` from the tree and shapes up what the next release
will look like.
2025-09-03 07:05:50 -04:00
Luke Parker
41c34d7f11 Remove crypto-bigint from the public API of prime-field 2025-09-03 07:05:45 -04:00
Luke Parker
974bc82387 Remove unnecessary to_string for clone 2025-09-03 06:11:32 -04:00
Luke Parker
47ef24a7cc Remove unused patch for parking_lot_core 2025-09-03 06:11:32 -04:00
Luke Parker
a2209dd6ff Misc clippy fixes 2025-09-03 06:10:54 -04:00
Luke Parker
2032cf355f Expose coins::Pallet::transfer_internal as transfer_fn
It is safe to call and assumes no preconditions.
2025-09-03 00:48:17 -04:00
Luke Parker
fe41b09fd4 Properly handle the error in validator-sets 2025-09-02 11:07:45 -04:00
Luke Parker
74bad049a7 Add abstraction for the embedded elliptic curve keys
It's minimal but still pleasant.
2025-09-02 10:42:06 -04:00
Luke Parker
72fefb3d85 Strongly type EmbeddedEllipticCurveKeys
Adds a signed variant to validate knowledge and ownership.

Add SCALE derivations for `EmbeddedEllipticCurveKeys`
2025-09-02 10:42:02 -04:00
Luke Parker
200c1530a4 WIP changes to validator-sets
Actually use the added `Allocations` abstraction

Start using the sessions API in the validator-sets pallet

Get a `substrate/validator-sets` approximate to compiling
2025-09-02 10:41:58 -04:00
Luke Parker
5736b87b57 Remove final references to scale in coordinator/processor
Slight tweaks to processor
2025-09-02 10:41:55 -04:00
Luke Parker
ada94e8c5d Get all processors to compile again
Requires splitting `serai-cosign` into `serai-cosign` and `serai-cosign-types`
so the processors don't require `serai-client/serai` (not correct yet).
2025-09-02 02:17:10 -04:00
Luke Parker
75240ed327 Update serai-message-queue to the new serai-primitives 2025-09-02 02:17:10 -04:00
Luke Parker
6177cf5c07 Have serai-runtime compile again 2025-09-02 02:17:10 -04:00
Luke Parker
0d38dc96b6 Use serai-primitives, not serai-client, when possible in coordinator/*
Also updates `serai-coordinator-tributary` to prefer `borsh` to SCALE.
2025-09-02 02:17:10 -04:00
Luke Parker
e8094523ff Use borsh instead of SCALE within tendermint-machine, tributary-sdk
Not only does this follow our general practice, but the latest SCALE has a
possibly-lossy truncation in its current implementation for `enum`s which I'd
like to avoid without simply silencing it.
2025-09-02 02:17:09 -04:00
Luke Parker
53a64bc7e2 Update serai-abi, and dependencies, to patch-polkadot-sdk 2025-09-02 02:17:09 -04:00
Luke Parker
c0e48867e1 Merge branch 'develop' into next 2025-09-01 21:22:28 -04:00
Luke Parker
0066b94d38 Update substrate-wasm-builder from serai/polkadot-sdk to serai/patch-polkadot-sdk
Steps towards allowing us to delete the `serai/polkadot-sdk` repository.
2025-09-01 21:07:11 -04:00
Luke Parker
7d54c02ec6 Update to latest nightly
Replaces #671 due to a lint being triggered.
2025-09-01 16:48:34 -04:00
Mohan
568324f631 fix(spec): svm version mismatch in docs; document foundryup (#665)
* fix(spec): Change svm version in docs to 0.8.26

* fix(spec): add instructions for using foundryup
2025-09-01 15:58:59 -04:00
Mohan
2a02a8dc59 fix(spec): svm version mismatch in docs; document foundryup (#665)
* fix(spec): Change svm version in docs to 0.8.26

* fix(spec): add instructions for using foundryup
2025-09-01 15:57:19 -04:00
Luke Parker
eaa9a0e5a6 Pin actions in the pages workflow 2025-09-01 15:55:17 -04:00
Luke Parker
251996c1b0 Use solc 0.8.26
`next` already does, and it's annoying to have to consistently switch between
the two branches.
2025-09-01 15:55:06 -04:00
Luke Parker
98b9cc82a7 Fix Some(_) which should be Ok(_) 2025-09-01 15:42:47 -04:00
Luke Parker
3c6e889732 Update Cargo.lock after rebase 2025-08-30 19:36:46 -04:00
Luke Parker
354efc0192 Add deallocate function to validator-sets session abstraction 2025-08-30 18:34:20 -04:00
Luke Parker
e20058feae Add a Sessions abstraction for validator-sets storage 2025-08-30 18:34:20 -04:00
Luke Parker
09f0714894 Add a dedicated Allocations struct for managing validator set allocations
Part of the DB abstraction necessary for this spaghetti.
2025-08-30 18:34:15 -04:00
Luke Parker
d3d539553c Restore the coins pallet to the runtime 2025-08-30 18:32:26 -04:00
Luke Parker
b08ae8e6a7 Add a non-canonical SCALE derivations feature
Enables representing IUMT within `StorageValues`. Applied to a variety of
values.

Fixes a bug where `Some([0; 32])` would be considered a valid block anchor.
2025-08-30 18:32:21 -04:00
Luke Parker
35db2924b4 Populate UnbalancedMerkleTrees in headers 2025-08-30 18:32:20 -04:00
Luke Parker
bfff823bf7 Add an UnbalancedMerkleTree primitive
The reasoning for it is documented with itself. The plan is to use it within
our header for committing to the DAG (allowing one header per epoch, yet
logarithmic proofs for any header within the epoch), the transactions
commitment (allowing logarithmic proofs of a transaction within a block,
without padding), and the events commitment (allowing logarithmic proofs of
unique events within a block, despite events not having a unique ID inherent).

This also defines transaction hashes and performs the necessary modifications
for transactions to be unique.
2025-08-30 18:32:16 -04:00
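A toy sketch of a padding-free Merkle fold consistent with the stated goals (not Serai's actual construction; the carry rule for a lone trailing node is an assumption):

```rust
// Fold a layer of hashes pairwise; a lone trailing node is carried upward
// unchanged rather than padded. `hash2` stands in for the real combiner.
fn unbalanced_root(
  mut layer: Vec<[u8; 32]>,
  hash2: impl Fn(&[u8; 32], &[u8; 32]) -> [u8; 32],
) -> Option<[u8; 32]> {
  if layer.is_empty() {
    return None;
  }
  while layer.len() > 1 {
    let mut next = Vec::with_capacity(layer.len().div_ceil(2));
    for pair in layer.chunks(2) {
      match pair {
        [left, right] => next.push(hash2(left, right)),
        [lone] => next.push(*lone), // carry the odd node up, no padding
        _ => unreachable!(),
      }
    }
    layer = next;
  }
  Some(layer[0])
}
```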
Luke Parker
352af85498 Use borsh entirely in create_db 2025-08-30 18:32:07 -04:00
Luke Parker
ecad89b269 Remove now-consolidated primitives crates 2025-08-30 18:32:06 -04:00
Luke Parker
48f5ed71d7 Skeleton runtime with new types 2025-08-30 18:30:38 -04:00
Luke Parker
ed9cbdd8e0 Have apply return Ok even if calls failed
This ensures fees are paid, and block building isn't interrupted, even for TXs
which error.
2025-08-30 18:27:23 -04:00
Luke Parker
0ac11defcc Serialize BoundedVec not with a u32 length, but the minimum-viable uN where N%8==0
This does break borsh's definition of a Vec EXCEPT if the BoundedVec is
considered an enum. For sufficiently low bounds, this is viable, though it
requires automated code generation to be sane.
2025-08-30 18:27:23 -04:00
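A sketch of the described encoding under an assumed bound below 256, so the length fits the minimum-viable `uN` of a single byte (real code would derive the width from the bound):

```rust
// Encode a bounded byte vector with a one-byte length prefix instead of
// borsh's usual four-byte little-endian u32 length.
fn serialize_bounded_vec<const BOUND: usize>(values: &[u8], out: &mut Vec<u8>) {
  assert!(BOUND < 256, "a u8 length prefix assumes a bound below 256");
  assert!(values.len() <= BOUND, "value exceeds the bound");
  out.push(values.len() as u8);
  out.extend_from_slice(values);
}

fn main() {
  let mut out = Vec::new();
  serialize_bounded_vec::<16>(&[1, 2, 3], &mut out);
  // One length byte plus the contents: 4 bytes rather than borsh's 7.
  assert_eq!(out, vec![3, 1, 2, 3]);
}
```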
Luke Parker
24e89316d5 Correct distinction/flow of check/validate/apply 2025-08-30 18:27:23 -04:00
Luke Parker
3f03dac050 Make transaction an enum of Unsigned, Signed 2025-08-30 18:27:23 -04:00
Luke Parker
820b710928 Remove RuntimeCall from Transaction
I believe this was originally here as we needed to return a reference, not an
owned instance, so this caching enabled returning a reference? Regardless, it
isn't valuable now.
2025-08-30 18:27:23 -04:00
Luke Parker
88c7ae3e7d Add traits necessary for serai_abi::Transaction to be usable in-runtime 2025-08-30 18:27:22 -04:00
Luke Parker
dd5e43760d Add the UNIX timestamp (in milliseconds) to the block
This is read from the BABE pre-digest when converting from a SubstrateHeader.
This causes the genesis block to have time 0 and all blocks produced with BABE
to have a time of the slot time. While the slot time is in 6-second intervals
(due to our target block time), defining in milliseconds preserves the ABI for
long-term goals (sub-second blocks).

Usage of the slot time deduplicates this field with BABE, and leaves the only
possible manipulation to propose during a slot or to not propose during a slot.

The actual reason this was implemented this way is because the Header trait is
overly restrictive and doesn't allow definition with new fields. Even if we
wanted to express the timestamp within the SubstrateHeader, we can't without
replacing Header::new and making a variety of changes to the polkadot-sdk
accordingly. Those aren't worth it at this moment compared to the solution
implemented.
2025-08-30 18:27:09 -04:00
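The arithmetic implied above, under the stated 6-second target block time (a sketch: the names are illustrative, and BABE slot numbers are taken as absolute slots since the UNIX epoch):

```rust
// Converting a BABE slot number to a millisecond UNIX timestamp. Defining the
// field in milliseconds preserves the ABI should sub-second blocks arrive.
const SLOT_DURATION_MS: u64 = 6_000;

fn slot_to_unix_time_ms(slot: u64) -> u64 {
  slot * SLOT_DURATION_MS
}

fn main() {
  // Consecutive slots are one target block time (6000 ms) apart.
  assert_eq!(slot_to_unix_time_ms(2) - slot_to_unix_time_ms(1), 6_000);
}
```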
Luke Parker
776e417fd2 Redo primitives, abi
Consolidates all primitives into a single crate. We didn't benefit from its
fragmentation. I'm hesitant to say the new internal-organization is better (it
may be just as clunky), but it's at least in a single crate (not spread out
over micro-crates).

The ABI is the most distinct. We now entirely own it. Block header hashes don't
directly commit to any BABE data (avoiding potentially ~4 KB headers upon
session changes), and are hashed as borsh (a more widely used codec than
SCALE). There are still Substrate variants, using SCALE and with the BABE data,
but they're prunable from a protocol design perspective.

Defines a transaction as a Vec of Calls, allowing atomic operations.
2025-08-30 18:26:37 -04:00
Luke Parker
2f8ce15a92 Update deny, rust-src component 2025-08-30 18:25:02 -04:00
Luke Parker
af56304676 Update the git tags
Does no actual migration work. This allows establishing the difference in
dependencies between substrate and polkadot-sdk/substrate.
2025-08-30 18:23:49 -04:00
Luke Parker
62a2c4f20e Update nightly version 2025-08-30 18:22:48 -04:00
Luke Parker
c69841710a Remove unnecessary to_string for clone 2025-08-30 18:08:08 -04:00
Luke Parker
3158590675 Remove unused patch for parking_lot_core 2025-08-30 16:20:29 -04:00
Luke Parker
263d75d380 Pin actions in the pages workflow 2025-08-30 15:48:56 -04:00
Luke Parker
030185c7fc Pin the nightly version used within the no-std tests 2025-08-30 15:40:36 -04:00
Luke Parker
e2dc5db7aa Various feature tweaks and updates 2025-08-29 06:42:37 -04:00
Luke Parker
90bc364f9f Replace Ciphersuite::hash_to_F
The prior-present `Ciphersuite::hash_to_F` was a sin. Implementations took a
DST, yet were not required to securely handle it. It was also biased towards the
requirements of `modular-frost` as `ciphersuite` was originally written all
those years ago, when `modular-frost` had needs exceeding what `ff`, `group`
satisfied.

Now, the hash is bound to produce an output which can be converted to a scalar
with `ff::FromUniformBytes`. A new `hash_to_F` was defined, which accepts a
single argument: the value to hash (removing the potential to insecurely handle
the DST by removing the DST entirely). Due to `digest` yielding a
`GenericArray`, yet
`FromUniformBytes` taking a `const usize`, the `ciphersuite` crate now defines
a `FromUniformBytes` trait taking an array (then implemented for all satisfiers
of `ff::FromUniformBytes`). In order to get the array type from the
`GenericArray`, the output of the hash, `digest` is updated to the `0.11`
release candidate, which moves to `hybrid-array`, solving that problem.

The existing, specific `hash_to_F` functions have been moved to `modular-frost`
as necessary.

`hybrid-array` itself is patched to a fork due to
https://github.com/RustCrypto/hybrid-array/issues/131.
2025-08-29 05:21:43 -04:00
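A condensed sketch of the new shape (the signatures are assumptions based on the description above; the actual crate works against `digest` and `ff`):

```rust
// The crate-defined trait over plain arrays, implementable for any satisfier
// of ff::FromUniformBytes. 64 uniform bytes comfortably cover a ~256-bit
// field without meaningful bias.
trait FromUniformBytes<const N: usize>: Sized {
  fn from_uniform_bytes(bytes: [u8; N]) -> Self;
}

// The new hash_to_F: a single argument, with no caller-supplied DST left to
// mishandle. `hash64` stands in for the ciphersuite's fixed, internally
// domain-separated hash producing 64 bytes.
fn hash_to_f<F: FromUniformBytes<64>>(hash64: impl Fn(&[u8]) -> [u8; 64], msg: &[u8]) -> F {
  F::from_uniform_bytes(hash64(msg))
}
```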
Luke Parker
a4811c9a41 Tag dalek-ff-group 0.4.6 2025-08-29 02:07:29 -04:00
Luke Parker
12cfa6b2a5 Differentiate no-std from alloc within tests/no-std
Fixes `no-std` builds for packages which are intended to be `no-std` (without
`alloc`).

Updates a variety of MSRVs to 1.73 due to `flexible-transcript` no longer using
`std-shims` to achieve 1.66 (as `std-shims` requires `alloc`). A future
improvement would be for `std-shims` to have an `alloc` feature and only
provide MSRV shims without it.
2025-08-29 01:23:18 -04:00
Luke Parker
0c71b6fc4d Fix 32-bit, no-std builds of crypto limbs 2025-08-29 01:04:40 -04:00
Luke Parker
ffe1b60a11 Move the contents of the evrf/ folder to the crypto/ folder
It was justified when it had several libraries, which it no longer does thanks
to the upstreaming with monero-oxide.
2025-08-29 00:25:09 -04:00
Luke Parker
5526b8d439 Use SEC1 for the encoding of secq256k1 points, like secp256k1 does 2025-08-28 23:52:05 -04:00
Luke Parker
beac35c119 Introduce the complete point addition formulas to short-weierstrass 2025-08-28 23:16:16 -04:00
Luke Parker
62bb75e09a Move secq256k1 to short-weierstrass 2025-08-28 23:07:22 -04:00
Luke Parker
45bd376c08 Fix handling of prime/composite-order curves within short-weierstrass 2025-08-28 22:31:33 -04:00
Luke Parker
da190759a9 Move embedwards25519 over to short-weierstrass 2025-08-28 22:08:17 -04:00
Luke Parker
f2d399ba1e Add crate for working with short Weierstrass elliptic curves 2025-08-28 08:20:31 -04:00
Luke Parker
220bcbc592 Add prime-field crate
prime-field introduces a macro to generate a prime field, in its entirety,
de-duplicating code across minimal-ed448, embedwards25519, and secq256k1.
2025-08-28 03:36:15 -04:00
Luke Parker
85949f4b04 Update from kayabaNerve/monero-oxide to monero-oxide/monero-oxide 2025-08-28 01:09:18 -04:00
Luke Parker
f8adfb56ad Remove unwrap within debug assertion 2025-08-26 23:15:58 -04:00
Luke Parker
2f833dec77 Add job to competently check MSRVs
The prior workflow (now deleted) required manually specifying the packages to
check and only checked the package could compile under the stated MSRV. It
didn't verify it was actually the _minimum_ supported Rust version. The new
version finds the MSRV from scratch to check if the stated MSRV aligns.

Updates stated MSRVs accordingly.

Also removes many explicit dependencies from secq256k1 in favor of their
re-exports via k256. Not directly relevant, just part of tidying up all the
`toml`s.
2025-08-26 14:13:00 -04:00
Luke Parker
e3e41324c9 Update licenses 2025-08-25 10:06:35 -04:00
Luke Parker
6ed7c5d65e Update dependencies always built with optimizations 2025-08-25 09:50:13 -04:00
Luke Parker
9dddfd91c8 Fix clippy, update old dependencies 2025-08-25 09:17:29 -04:00
Luke Parker
c24b694fb2 Correct secq256k1/embedwards25519 Zeroize implementations 2025-08-25 05:06:56 -04:00
Luke Parker
738babf7e9 dkg-evrf crate
monero-oxide relies on ciphersuite, which is in-tree, yet we've made breaking
changes since. This commit adds a patch so
monero-oxide -> patches/ciphersuite -> crypto/ciphersuite, with
patches/ciphersuite resolving the breaking changes.
2025-08-25 04:49:54 -04:00
Luke Parker
33faa53b56 Remove dleq, dkg-promote, dkg-pedpop per #597
Does not move them to a new repository at this time.
2025-08-24 21:40:18 -04:00
Luke Parker
8c366107ae Merge branch 'develop' into next
This resolves the conflicts and gets the workspace `Cargo.toml`s to not be
invalid. It doesn't actually get clippy to pass again yet.

Does move `crypto/dkg/src/evrf` into a new `crypto/dkg/evrf` crate (which does
not yet compile).
2025-08-23 15:05:13 -04:00
Luke Parker
7a790f3a20 ff/alloc when ciphersuite/alloc 2025-08-23 11:00:05 -04:00
Luke Parker
a7c77f8b5f repr(transparent) on dalek_ff_group::FieldElement 2025-08-23 05:17:43 -04:00
Luke Parker
da3095ed15 Remove FieldElement::from_square
The new `FieldElement::from_u256` is sufficient to load an unreduced value. The
caller can perform the square themselves, without us explicitly supporting this
special case.

Updates the monero-oxide version used to one which no longer uses
`FieldElement::from_square` (as their use is why it was added).
2025-08-22 18:42:43 -04:00
Luke Parker
758d422595 Have <ed448::Point as Zeroize>::zeroize yield a well-defined value 2025-08-20 08:14:00 -04:00
Luke Parker
9841061b49 Add missing feature in substrate/client 2025-08-20 06:38:25 -04:00
Luke Parker
4122a0135f Fix dirty Cargo.lock 2025-08-20 05:20:47 -04:00
Luke Parker
b63ef32864 Smash Ciphersuite definitions into their own crates
Uses dalek-ff-group for Ed25519 and Ristretto. Uses minimal-ed448 for Ed448.
Adds ciphersuite-kp256 for Secp256k1 and P-256.
2025-08-20 05:12:36 -04:00
Luke Parker
8be03a8fc2 Fix dirty lockfile 2025-08-20 01:15:56 -04:00
Luke Parker
677a2e5749 Fix zeroization timeline in multiexp, cargo machete 2025-08-20 00:35:56 -04:00
Luke Parker
38bda1d586 dalek_ff_group::FieldElement: FromUniformBytes<64> 2025-08-20 00:23:39 -04:00
Luke Parker
2bc2ca6906 Implement FromUniformBytes<64> for dalek_ff_group::Scalar 2025-08-20 00:06:07 -04:00
Luke Parker
900a6612d7 Use std-shims to reduce flexible-transcript MSRV to 1.66
flexible-transcript already had a shim to support <1.66. This was irrelevant
since flexible-transcript had a MSRV of 1.73. Due to how clunky it was, it has
been removed despite theoretically enabling an even lower MSRV.
2025-08-19 23:43:26 -04:00
Luke Parker
17c1d5cd6b Tweak multiexp to Zeroize points when invoked in constant time, not just scalars 2025-08-19 22:28:59 -04:00
Luke Parker
8a1b56a928 Make the transcript dependency optional for schnorr-signatures
It's only required when aggregating.
2025-08-19 21:50:58 -04:00
Luke Parker
75964cf6da Place Schnorr signature aggregation behind a feature flag 2025-08-19 21:45:59 -04:00
Luke Parker
d407e35cee Fix Ciphersuite feature flagging 2025-08-19 21:42:25 -04:00
Luke Parker
c8ef044acb Version bump std-shims 2025-08-19 21:01:14 -04:00
Luke Parker
ddbc32de4d Update ciphersuite/dkg MSRVs 2025-08-19 18:20:19 -04:00
Luke Parker
e5ccfac19e Replace bespoke LazyLock/OnceLock with spin re-exports
Presumably notably slower on platforms with std, yet only when compiled with old
versions of Rust for which the option is this or no support anyways.
2025-08-19 18:10:33 -04:00
Luke Parker
432daae1d1 Polyfill extension traits for div_ceil and io::Error::other 2025-08-19 18:04:29 -04:00
Luke Parker
da3a85efe5 Only drop OnceLock value if initialized 2025-08-19 17:50:04 -04:00
Luke Parker
1e0240123d Shim LazyLock when before 1.70 2025-08-19 17:40:19 -04:00
Luke Parker
f6d4d1b084 Remove unused import, fix dirty Cargo.lock 2025-08-19 16:24:19 -04:00
Luke Parker
1b37dd2951 Shim std::sync::LazyLock for Rust < 1.80
Allows downgrading some crypto crates' MSRV to 1.79 as well.
2025-08-19 16:15:44 -04:00
Luke Parker
f32e0609f1 Add warning to dalek-ff-group 2025-08-19 15:25:40 -04:00
Luke Parker
ca85f9ba0c Remove the poorly-designed reduce_512 API
Unused and unpublished. This was only added in the FCMP++ branch as a quick fix
for performance reasons. Finding a better API is still a tricky question, but
this API is _bad_.
2025-08-19 15:24:49 -04:00
Luke Parker
cfd1cb3a37 Add FieldElement::wide_reduce to dalek-ff-group 2025-08-19 13:48:54 -04:00
Luke Parker
f2c13a0040 Expose Once within std-shims, bump spin to 0.9
This is technically a semver break due to bumping spin to 0.10, with the types
from spin being directly exposed. Long-term, we should not directly expose spin
but instead have our own types which are thin wrappers around spin (clearly
defining our API and allowing upgrading internals without breaking semver).
2025-08-19 13:36:01 -04:00
Luke Parker
961f46bc04 Add const fn to create a dalek-ff-group FieldElement 2025-08-19 13:17:39 -04:00
Luke Parker
2c4de3bab4 Bump version of ff-group-tests 2025-08-19 12:51:16 -04:00
Luke Parker
95c30720d2 Update how x coordinates are handled in bitcoin-serai 2025-08-18 14:52:29 -04:00
Luke Parker
ceede14f5c Fix misc compilation errors 2025-08-18 14:52:29 -04:00
Luke Parker
5e60ea9718 Don't offset nonces yet negate to achieve an even Y coordinate
Replaces an iterative loop with an immediate result, if action is necessary.
2025-08-18 14:52:29 -04:00
Luke Parker
153f6f2f2f Update to a monero-oxide patched to dkg 0.6 2025-08-18 14:52:29 -04:00
Luke Parker
104c0d4492 Rename ThresholdKeys::secret_share to ThresholdKeys::original_secret_share 2025-08-18 14:52:29 -04:00
Luke Parker
7c8f13ab28 Raise flexible-transcript requirement as required 2025-08-18 14:52:29 -04:00
Luke Parker
cb0deadf9a Version bump flexible-transcript 2025-08-18 14:52:29 -04:00
Luke Parker
cb489f9cef Other version bumps 2025-08-18 14:52:29 -04:00
Luke Parker
cc662cb591 Version bumps, add necessary version specifications 2025-08-18 14:52:29 -04:00
Luke Parker
a8b8844e3f Fix MSRV for simple-request 2025-08-18 14:52:29 -04:00
Luke Parker
82b543ef75 Fix clippy lint for ed448 on optional compilation path 2025-08-18 14:52:29 -04:00
Luke Parker
72e80c1a3d Update everything which uses dkg to the new APIs 2025-08-18 14:52:29 -04:00
Luke Parker
b6edc94bcd Add dealer key generation crate 2025-08-18 14:52:29 -04:00
Luke Parker
cfce2b26e2 Update READMEs, targeting an 80-character line limit 2025-08-18 14:52:29 -04:00
Luke Parker
e87bbcda64 Have modular-frost compile again 2025-08-18 14:52:29 -04:00
Luke Parker
9f84adf8b3 Smash dkg into dkg, dkg-[recovery, promote, musig, pedpop]
promote and pedpop require dleq, which doesn't support no-std. All three should
be moved outside the Serai repository, per #597, as none are planned for use
and worth covering under our BBP.
2025-08-18 14:52:29 -04:00
Luke Parker
3919cf55ae Extend modular-frost to test with scaled and offset keys
The transcript transcripted the group key _plus_ the offset, when it should've
only transcripted the group key, as the declared group key already had the
offset applied. This has been fixed.
2025-08-18 14:52:29 -04:00
Luke Parker
38dd8cb191 Support taking arbitrary linear combinations of signing keys, not just additive offsets 2025-08-18 14:52:29 -04:00
Luke Parker
f2563d39cb Correct crypto MSRVs 2025-08-18 14:52:29 -04:00
Luke Parker
15a9cbef40 git checkout -f next ./crypto
Proceeds to remove the eVRF DKG after, only keeping what's relevant to this
branch alone.
2025-08-18 14:52:29 -04:00
Luke Parker
078d6e51e5 Re-install python3 after removal to solve unmet dependencies 2025-08-15 16:17:31 -04:00
Luke Parker
6c33e18745 Explicitly install python3 to fix build-dependencies 2025-08-15 16:14:10 -04:00
Luke Parker
b743c9a43e Update Rust version
This causes the Serai node to compile and run again.
2025-08-15 15:26:16 -04:00
Luke Parker
0c2f2979a9 Remove monero-serai, migrating to monero-oxide 2025-08-15 11:45:20 -04:00
Luke Parker
971951a1a6 Add overflow-checks even on release, per good practice 2025-08-15 10:56:28 -04:00
Luke Parker
92d9e908cb Version bumps for packages that needed to be published for monero-oxide 2025-08-15 10:56:10 -04:00
Luke Parker
a32b97be88 Move to wasm32v1-none from wasm32-unknown-unknown
Works towards fixing how the Substrate node Docker image no longer works.
2025-08-15 10:55:05 -04:00
Luke Parker
e3809b2ff1 Remove unnecessary edits to Docker config in an attempt to fix the CI 2025-08-12 01:27:28 -04:00
Luke Parker
fd2d8b4f0a Use Rust 1.89 when installing bins via cargo, version pin svm-rs
svm-rs just released a new version requiring 1.89 to compile. This moves us to
not install _any_ software with 1.85, minimizing how many toolchains we have in
use.
2025-08-12 01:27:28 -04:00
Luke Parker
bc81614894 Attempt Docker 24 again 2025-08-12 01:27:28 -04:00
Luke Parker
8df5aa2e2d Forward docker stderr to stdout in case stderr is being dropped for some reason 2025-08-12 01:27:28 -04:00
Luke Parker
b000740470 Docker 25 since 24 doesn't have an active tag anymore 2025-08-12 01:27:28 -04:00
Luke Parker
b9f554111d Attempt to use Docker 24
A long-shot premised on an old forum post describing how downgrading to Docker
24 solved their instance of the error we face, though our conditions for it are
presumably different.
2025-08-12 01:27:28 -04:00
Luke Parker
354c408e3e Stop using an older version of Docker 2025-08-12 01:27:28 -04:00
Luke Parker
df3b60376a Restore Debian 12 Bookworm over Debian 11 Bullseye 2025-08-12 01:27:28 -04:00
Luke Parker
8d209c652e Add missing "-4" arguments to wget 2025-08-12 01:27:28 -04:00
Luke Parker
9ddad794b4 Use wget -4 for the same reason as the prior commit 2025-08-12 01:27:28 -04:00
Luke Parker
b934e484cc Replace busybox wget with wget on alpine to attempt to resolve DNS issues
See https://github.com/alpinelinux/docker-alpine/issues/155.
2025-08-12 01:27:28 -04:00
Luke Parker
f8aee9b3c8 Add overflow-checks = true recommendation to monero-serai 2025-08-12 01:27:28 -04:00
Luke Parker
f51d77d26a Fix tweaked Substrate connection code in serai-client tests 2025-08-12 01:27:28 -04:00
Luke Parker
0780deb643 Use three separate commands within the Bitcoin Dockerfile to download the release
Attempts to debug which command is failing, as right now, only the command as a whole is within the CI.
2025-08-12 01:27:28 -04:00
Luke Parker
75c38560f4 Bookworm -> Bullseye, except for the runtime 2025-08-12 01:27:28 -04:00
Luke Parker
9f1c5268a5 Attempt downgrading Docker from 27 to 26 2025-08-12 01:27:28 -04:00
Luke Parker
35b113768b Attempt downgrading docker from .28 to .27 2025-08-12 01:27:28 -04:00
Luke Parker
f2595c4939 Tweak how substrate-client tests wait to connect to the Monero node 2025-08-12 01:27:28 -04:00
Luke Parker
8fcfa6d3d5 Add dedicated error for when amounts aren't representable within a u64
Fixes the issue where _inputs_ could still overflow u64::MAX and cause a panic.
2025-08-12 01:27:28 -04:00
Luke Parker
54c9d19726 Have docker install set host 2025-08-12 01:27:28 -04:00
Luke Parker
25324c3cd5 Add uidmap dependency for rootless Docker 2025-08-12 01:27:28 -04:00
Luke Parker
ecb7df85b0 if: runner.os == 'Linux', with single quotes 2025-08-12 01:27:28 -04:00
Luke Parker
68c7acdbef Attempt using rootless Docker in CI via the setup-docker-action
Restores using ubuntu-latest.

Basically, at some point in the last year the existing Docker e2e tests started
failing. I'm unclear if this is an issue with the OS, the docker packages, or
what. This just tries to find a solution.
2025-08-12 01:27:28 -04:00
Luke Parker
8b60feed92 Normalize FROM AS casing in Dockerfiles 2025-08-12 01:27:28 -04:00
Luke Parker
5c895efcd0 Downgrade tests requiring Docker from Ubuntu latest to Ubuntu 22.04
Attempts to resolve containers immediately exiting for some specific test runs.
2025-08-12 01:27:28 -04:00
Luke Parker
60e55656aa deny --hide-inclusion-graph 2025-08-12 01:27:28 -04:00
Luke Parker
9536282418 Update which deb archive to use within the runtime Dockerfile 2025-08-12 01:27:28 -04:00
Luke Parker
8297d0679d Update substrate to one with a properly defined panic handler as of modern Rust 2025-08-12 01:27:28 -04:00
Luke Parker
d9f854b08a Attempt to fix install of clang within runtime Dockerfile 2025-08-12 01:27:28 -04:00
Luke Parker
8aaf7f7dc6 Remove (presumably) unnecessary command to explicitly install python 2025-08-12 01:27:28 -04:00
Luke Parker
ce447558ac Update Rust versions used in orchestration 2025-08-12 01:27:28 -04:00
Luke Parker
fc850da30e Missing --allow-remove-essential flag 2025-08-12 01:27:28 -04:00
Luke Parker
d6f6cf1965 Attempt to force remove shim-signed to resolve 'unmet dependencies' issues with shim-signed 2025-08-12 01:27:28 -04:00
Luke Parker
4438b51881 Expand python packages explicitly installed 2025-08-12 01:27:28 -04:00
Luke Parker
6ae0d9fad7 Install cargo deny with Rust 1.85 and pin its version 2025-08-12 01:27:28 -04:00
Luke Parker
ad08b410a8 Pin cargo-machete to 0.8.0 to prevent other unexpected CI failures 2025-08-12 01:27:28 -04:00
Luke Parker
ec3cfd3ab7 Explicitly install python3 after removing various unnecessary packages 2025-08-12 01:27:28 -04:00
Luke Parker
01eb2daa0b Update dated version of actions/cache 2025-08-12 01:27:28 -04:00
Luke Parker
885000f970 Add update, upgrade, fix-missing call to Ubuntu build dependencies
Attempts to fix a CI failure for some misconfiguration...
2025-08-12 01:27:28 -04:00
Luke Parker
4be506414b Install cargo machete with Rust 1.85
cargo machete now uses Rust's 2024 edition, and 1.85 was the first to ship it.
2025-08-12 01:27:28 -04:00
Luke Parker
1143d84e1d Remove msbuild from packages to remove when the CI starts
Apparently, it's no longer installed by default.
2025-08-12 01:27:28 -04:00
Luke Parker
336922101f Further harden decoy selection
It risked panicking if a non-monotonic distribution was returned. While the
provided RPC code won't return non-monotonic distributions, users are allowed
to define their own implementations and override the provided method. Said
implementations could omit this required check.
2025-08-12 01:27:28 -04:00
Luke Parker
ffa033d978 Clarify transcripting for Clsag::verify, Mlsag::verify, as with Clsag::sign 2025-08-12 01:27:28 -04:00
Luke Parker
23f986f57a Tweak the Substrate runtime as required by the Rust version bump performed 2025-08-12 01:27:28 -04:00
Luke Parker
bb726b58af Fix #654 2025-08-12 01:27:28 -04:00
Luke Parker
387615705c Fix #643 2025-08-12 01:27:28 -04:00
Luke Parker
c7f825a192 Rename Bulletproof::calculate_bp_clawback to Bulletproof::calculate_clawback 2025-08-12 01:27:28 -04:00
Luke Parker
d363b1c173 Fix #630 2025-08-12 01:27:28 -04:00
Luke Parker
d5077ae966 Respond to 13.1.1.
Uses Zeroizing for username/password in monero-simple-request-rpc.
2025-08-12 01:27:28 -04:00
Luke Parker
188fcc3cb4 Remove potentially-failing unchecked arithmetic operations for ones which error
In response to 9.13.3.

Requires a bump to Rust 1.82 to take advantage of `Option::is_none_or`.
2025-08-12 01:27:28 -04:00
Luke Parker
cbab9486c6 Clarify messages in non-debug assertions 2025-08-12 01:27:28 -04:00
Luke Parker
a5f4c450c6 Response to usage of unwrap in non-test code
This commit replaces all usage of `unwrap` with `expect` within
`networks/monero`, clarifying why the risked panic is unreachable. This commit
also replaces some uses of `unwrap` with solutions which are guaranteed not to
fail.

Notably, compilation on 128-bit systems is prevented, ensuring
`u64::try_from(usize::MAX)` will never panic at runtime.

Slight breaking changes are additionally included as necessary to massage out
some avoidable panics.
2025-08-12 01:27:28 -04:00
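The 128-bit prevention presumably amounts to a compile-time guard along these
lines (a sketch, not the exact code):

```rust
// With pointers at most 64 bits wide, `u64::try_from(usize)` cannot fail,
// so the `expect` calls on such conversions are unreachable
#[cfg(target_pointer_width = "128")]
compile_error!("128-bit targets are unsupported");
```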
Luke Parker
4f65a0b147 Remove Clone from ClsagMultisigMask{Sender, Receiver}
This had ill-defined properties on Clone, as a mask could be sent multiple times
(unintended) and multiple algorithms may receive the same mask from a singular
sender.

Requires removing the Clone bound within modular-frost and expanding the test
helpers accordingly.

This was not raised in the audit yet was identified upon independent review.
2025-08-12 01:27:28 -04:00
Luke Parker
feb18d64a7 Respond to 2 3
We now use `FrostError::InternalError` instead of a panic to represent the mask
not being set.
2025-08-12 01:27:28 -04:00
Luke Parker
cb1e6535cb Respond to 2 2 2025-08-12 01:27:28 -04:00
Luke Parker
6b8cf6653a Respond to 1.1 A2 (also cited as 2 1)
`read_vec` was unbounded. It now accepts an optional bound. In some places, we
are able to define and provide a bound (Bulletproofs(+)' `L` and `R` vectors).
In others, we cannot (the amount of inputs within a transaction, which is not
subject to any rule in the current consensus other than the total transaction
size limit). Usage of `None` in those locations preserves the existing
behavior.
2025-08-12 01:27:28 -04:00
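A sketch of the bounded-read pattern described, with illustrative names (the
varint reader stands in for the existing length-prefix decoding):

```rust
use std::io::{self, Read};

// Monero-style LEB128 varint, included so the sketch is self-contained
fn read_varint<R: Read>(reader: &mut R) -> io::Result<u64> {
  let mut res = 0;
  for shift in (0 .. 64).step_by(7) {
    let mut byte = [0; 1];
    reader.read_exact(&mut byte)?;
    res |= u64::from(byte[0] & 0x7f) << shift;
    if (byte[0] & 0x80) == 0 {
      return Ok(res);
    }
  }
  Err(io::Error::other("varint too long"))
}

fn read_vec<R: Read, T>(
  read_t: impl Fn(&mut R) -> io::Result<T>,
  bound: Option<u64>,
  reader: &mut R,
) -> io::Result<Vec<T>> {
  let len = read_varint(reader)?;
  // `None` preserves the prior, unbounded behavior
  if len > bound.unwrap_or(u64::MAX) {
    Err(io::Error::other("vector length exceeded bound"))?;
  }
  // Don't trust `len` for the allocation itself
  let mut res = Vec::with_capacity(usize::try_from(len.min(1024)).unwrap());
  for _ in 0 .. len {
    res.push(read_t(reader)?);
  }
  Ok(res)
}
```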
Luke Parker
b426bfcfe8 Respond to 1.1 A1 2025-08-12 01:27:28 -04:00
Luke Parker
21ce50ecf7 Revert "Forward docker stderr to stdout in case stderr is being dropped for some reason"
This was intended for the monero-audit branch.
2025-08-10 20:53:09 -04:00
Luke Parker
a4ceb2e756 Forward docker stderr to stdout in case stderr is being dropped for some reason 2025-08-10 20:50:12 -04:00
Luke Parker
b59b1f59dd Remove ToB report PDF by request 2025-07-18 03:19:10 -04:00
Luke Parker
cc4a65e82a Add Trail of Bits audit of our Ethereum code 2025-07-12 03:29:56 -04:00
Luke Parker
eab5d9e64f Remove Mastodon link from README
Closes #662.
2025-07-12 03:29:21 -04:00
Luke Parker
4e0c58464f Update Router documentation after following B2 (B1 redux) 2025-04-12 10:04:10 -04:00
Luke Parker
205da3fd38 Update the Ethereum processor to the Router messages including their on-chain address
This only updates the syntax. It does not yet actually route the address as
necessary.
2025-04-12 09:57:29 -04:00
Luke Parker
f7e63d4944 Have Router signatures additionally sign the Router's address (B2)
This slightly modifies the gas usage of the contract in a way breaking the
existing vector. A new, much simpler, vector has been provided instead.
2025-04-12 09:55:40 -04:00
Luke Parker
b5608fc3d2 Update dated documentation for verifySignature (B1) 2025-04-12 08:42:45 -04:00
Luke Parker
33018bf6da Explicitly ban the identity point as an Ethereum Schnorr public key (002)
This doesn't have a well-defined affine representation. k256's behavior,
mapping it to (0, 0), means this would've been rejected anyways (so this isn't
a change of any current behavior), but it's best not to rely on such an
implementation detail.
2025-04-12 08:38:06 -04:00
Luke Parker
bef90b2f1a Fix gas estimation discrepancy when gas isn't monotonic 2025-04-12 08:32:11 -04:00
Luke Parker
184c02714a alloy-core 1.0, alloy 0.14, revm 0.22 (001)
This moves to Rust 1.86, as we were prior on Rust 1.81 and the new alloy
dependencies require 1.82.

The revm API changes were notable for us. Instead of relying on a modified call
instruction (with deep introspection into the EVM design), we now use the more
recent and now more prominent Inspector API. This:

1) Lets us perform far less introspection
2) Forces us to rewrite the gas estimation code we just had audited

Thankfully, it itself should be much easier to read/review, and our existing
test suite has extensively validated it.

001 was a concern for if/when this upgrade occurred. By doing the upgrade now,
with a dedicated test case ensuring the issue we would have had with alloy-core
0.8 and `validate=false` isn't actively an issue, we resolve it.
2025-04-12 08:09:09 -04:00
Luke Parker
5a7b815e2e Update nightly version 2025-02-04 07:57:04 -05:00
Luke Parker
22e411981a Resolve clippy errors from recent merges 2025-01-30 05:04:28 -05:00
akildemir
11d48d0685 add Serai JSON-RPC methods (#627)
* add serai rpc methods

* fix machete & dex quote price api

* fix validators api

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2025-01-30 04:23:03 -05:00
akildemir
e4cc23b72d add economic security pallet tests (#623) 2025-01-30 04:19:12 -05:00
akildemir
52d853c8ba add validator sets pallet tests (#614)
* add validator sets pallet tests

* update tests with new types

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2025-01-30 04:16:19 -05:00
akildemir
9c33a711d7 add in instructions pallet tests (#608)
* add pallet tests

* set mock runtime AllowMint to correct type
2025-01-30 04:13:21 -05:00
Luke Parker
a275023cfc Finish merging in the develop branch 2025-01-30 03:14:24 -05:00
Luke Parker
258c02ff39 Merge branch 'develop' into next
This is an initial resolution of conflicts which does not work.
2025-01-30 00:56:29 -05:00
Luke Parker
3655dc723f Use clearer identity check in equality 2025-01-30 00:13:55 -05:00
Luke Parker
315d4fb356 Correct decoding identity for embedwards25519/secq256k1 2025-01-29 23:01:45 -05:00
Luke Parker
2bc880e372 Downstream the eVRF libraries from FCMP++
Also adds no-std support to secq256k1 and embedwards25519.
2025-01-29 22:29:40 -05:00
Luke Parker
19422de231 Ensure a non-zero fee in the Router OutInstruction gas fuzz test 2025-01-27 15:39:55 -05:00
Luke Parker
fa0dadc9bd Rename Deployer bytecode to initcode 2025-01-27 15:39:06 -05:00
Luke Parker
f004c8726f Remove unused library bytecode from ethereum-schnorr-contract 2025-01-27 15:38:44 -05:00
Luke Parker
835b5bb06f Split tests across a few files, fuzz generate OutInstructions
Tests successful gas estimation even with more complex behaviors.
2025-01-27 13:59:11 -05:00
Luke Parker
0484113254 Fix the ability for a malicious adversary to snipe ERC20s out via re-entrancy from the ERC20 contract 2025-01-27 13:07:35 -05:00
Luke Parker
17cc10b3f7 Test Execute result decoding, reentrancy 2025-01-27 13:01:52 -05:00
Luke Parker
7e01589fba Erc20::approve for DestinationType::Contract
This allows the CREATE code to bork without the Serai router losing access to
the coins in question. It does incur overhead on the deployed contract, which
now no longer just has to query its balance but also has to call transferFrom,
but it's a safer pattern and not a UX detriment.

This also improves documentation.
2025-01-27 11:58:39 -05:00
Luke Parker
f8c3acae7b Check the Router-deployed contracts' code 2025-01-27 07:48:37 -05:00
Luke Parker
0957460f27 Add supporting security commentary to Router.sol 2025-01-27 07:36:23 -05:00
Luke Parker
ea00ba9ff8 Clarified usage of CREATE
CREATE was originally intended for gas savings. While one sketch did move to
CREATE2, the security concerns around address collisions (requiring all init
codes not be malleable to achieve security) continue to justify this.

To resolve the gas estimation concerns raised in the prior commit, the
createAddress function has been made constant-gas.
2025-01-27 07:36:13 -05:00
Luke Parker
a9625364df Test createAddress
Benchmarks gas usage.

Note the estimator needs to be updated, as createAddress is now variable-gas
with regards to the state.
2025-01-27 05:37:56 -05:00
Luke Parker
75c6427d7c CREATE uses RLP, not ABI-encoding 2025-01-27 04:24:25 -05:00
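For reference, a sketch of CREATE address derivation (alloy-primitives also
ships an `Address::create` helper for this):

```rust
use alloy_primitives::{keccak256, Address};

fn create_address(sender: Address, nonce: u64) -> Address {
  // RLP-encode the nonce: 0x80 for 0, the byte itself if < 0x80, else a
  // length prefix followed by the big-endian bytes
  let mut nonce_rlp = vec![];
  if nonce == 0 {
    nonce_rlp.push(0x80);
  } else if nonce < 0x80 {
    nonce_rlp.push(u8::try_from(nonce).unwrap());
  } else {
    let bytes = nonce.to_be_bytes();
    let bytes = &bytes[usize::try_from(nonce.leading_zeros() / 8).unwrap() ..];
    nonce_rlp.push(0x80 + u8::try_from(bytes.len()).unwrap());
    nonce_rlp.extend(bytes);
  }
  // The RLP list [sender, nonce], whose payload is always < 56 bytes here
  let mut rlp = vec![0xc0 + 21 + u8::try_from(nonce_rlp.len()).unwrap(), 0x80 + 20];
  rlp.extend(sender.as_slice());
  rlp.extend(&nonce_rlp);
  // The CREATE address is the last 20 bytes of the keccak256 of that list
  Address::from_slice(&keccak256(&rlp)[12 ..])
}
```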
Luke Parker
e742a6b0ec Test ERC20 OutInstructions 2025-01-27 02:08:01 -05:00
Luke Parker
5164a710a2 Redo gas estimation via revm
Adds a minimal amount of packages, yet does add decent complexity. Avoids
having constants which aren't exact (due to things like the quadratic memory
cost) and the issues which accordingly come with such estimates.
2025-01-26 22:42:50 -05:00
Luke Parker
27c1dc4646 Test ETH address/code OutInstructions 2025-01-24 18:46:17 -05:00
Luke Parker
3892fa30b7 Test an empty execute 2025-01-24 17:13:36 -05:00
Luke Parker
ed599c8ab5 Have the Batch event encode the amount of results
Necessary to distinguish a bitvec with 1 result from a bitvec with 7 results.
2025-01-24 17:04:25 -05:00
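A sketch of the ambiguity resolved (bit order illustrative): a single byte of
bitvec is equally valid for anywhere from 1 to 8 results, so decoding requires
the explicit count.

```rust
fn decode_results(count: usize, bitvec: &[u8]) -> Option<Vec<bool>> {
  // Without `count`, a one-byte bitvec could hold 1 ..= 8 results
  if bitvec.len() != count.div_ceil(8) {
    return None;
  }
  Some((0 .. count).map(|i| ((bitvec[i / 8] >> (i % 8)) & 1) == 1).collect())
}
```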
Luke Parker
29bb5e21ab Take advantage of RangeInclusive for specifying filters' blocks 2025-01-24 07:44:47 -05:00
Luke Parker
604a4b2442 Add execute_tx to fill in missing test cases reliant on it 2025-01-24 07:33:36 -05:00
Luke Parker
977dcad86d Test the Router rejects invalid signatures 2025-01-24 07:22:43 -05:00
Luke Parker
cefc542744 Test SeraiKeyWasNone 2025-01-24 06:58:54 -05:00
Luke Parker
164fe9a14f Test Router's InvalidSeraiKey error 2025-01-24 06:41:24 -05:00
Luke Parker
f948881eba Simplify async code in in_instructions_unordered
Outsources fetching the ERC20 events to top_level_transfers_unordered.
2025-01-24 05:43:04 -05:00
Luke Parker
201b675031 Test ERC20 InInstructions 2025-01-24 03:45:04 -05:00
Luke Parker
3d44766eff Add ERC20 InInstruction test 2025-01-24 03:23:58 -05:00
Luke Parker
a63a86ba79 Test Ether InInstructions 2025-01-23 09:30:54 -05:00
Luke Parker
e922264ebf Add selector collisions to the IERC20 lib 2025-01-23 08:25:59 -05:00
Luke Parker
7e53eff642 Fix the async flow with the Router
It had sequential async calls with complexity O(n), with a variety of redundant
calls. There was also a constant of... 4? 5? for each item. Now, the total
sequence depth is just 3-4.
2025-01-23 06:16:58 -05:00
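Conceptually, the flattening is the difference between one await per item and
one await of a joined future, as in this sketch (assuming futures-util; the
actual Router code is more involved):

```rust
use futures_util::future::try_join_all;

async fn fetch(id: u64) -> Result<u64, ()> {
  Ok(id)
}

// Sequential awaits have a sequence depth of O(n); a joined future has a
// depth of 1, with the individual calls resolving concurrently
async fn fetch_all(ids: Vec<u64>) -> Result<Vec<u64>, ()> {
  try_join_all(ids.into_iter().map(fetch)).await
}
```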
Luke Parker
669b8b776b Work on testing the Router
Completes the `Executed` enum in the router. Adds an `Escape` struct. Both are
needed for testing purposes.

Documents the gas constants in intent and reasoning.

Adds modernized tests around key rotation and the escape hatch.

Also updates the rest of the codebase which had accumulated errors.
2025-01-23 02:06:06 -05:00
Luke Parker
6508957cbc Make a proper nonReentrant modifier
Prior, execute couldn't be called twice within a single TX. Now, it can.

Also adds a bit more context to the escape hatch events/errors.
2025-01-23 00:04:44 -05:00
Luke Parker
373e794d2c Check the escaped-to address has code set
Also documents the choice not to use a confirmation flow there.
2025-01-22 22:45:51 -05:00
Luke Parker
c8f3a32fdf Replace custom read/write impls in router with borsh 2025-01-21 03:49:29 -05:00
Luke Parker
f690bf831f Remove old code still marked TODO 2025-01-19 02:36:34 -05:00
Luke Parker
0b30ac175e Restore workspace-wide clippy
Fixes accumulated errors in the Substrate code. Modifies the runtime build to
work with a modern clippy. Removes e2e tests from the workspace.
2025-01-19 02:27:35 -05:00
Luke Parker
47560fa9a9 Test manually implemented serializations in the Router lib 2025-01-19 00:45:26 -05:00
Luke Parker
9d57c4eb4d Downscope dependencies in serai-processor-ethereum-primitives, const-hex decode bytecode in ethereum-schnorr-contract 2025-01-19 00:16:50 -05:00
Luke Parker
642ba00952 Update Deployer README, 80-character line length 2025-01-19 00:03:56 -05:00
Luke Parker
3c9c12d320 Test the Deployer contract 2025-01-18 23:58:38 -05:00
Luke Parker
f6b52b3fd3 Maximum line length of 80 in Deployer.sol 2025-01-18 15:22:58 -05:00
Luke Parker
0d906363a0 Simplify and test deterministically_sign 2025-01-18 15:13:39 -05:00
Luke Parker
8222ce78d8 Correct accumulated errors in the processor 2025-01-18 12:41:57 -05:00
Luke Parker
cb906242e7 2025 nightly
Supersedes #640.
2025-01-18 12:41:25 -05:00
Luke Parker
2a19e9da93 Update to libp2p 0.54
This is the same libp2p Substrate uses as of
https://github.com/paritytech/polkadot-sdk/pull/6248.
2025-01-17 04:50:15 -05:00
Luke Parker
2226dd59cc Comment all dependencies in substrate/node
Causes the Cargo.lock to no longer include the substrate dependencies
(including its copy of libp2p).
2025-01-17 04:09:27 -05:00
Luke Parker
be2098d2e1 Remove Serai from the ConfirmDkgTask 2025-01-15 21:00:50 -05:00
Luke Parker
6b41f32371 Correct handling of InvalidNonce within the coordinator 2025-01-15 20:48:54 -05:00
Luke Parker
19b87c7f5a Add the DKG confirmation flow
Finishes the coordinator redo
2025-01-15 20:29:57 -05:00
Luke Parker
505f1b20a4 Correct re-attempts for the DKG Confirmation protocol
Also spawns the SetKeys task.
2025-01-15 17:49:41 -05:00
Luke Parker
8b52b921f3 Have the Tributary scanner yield DKG confirmation signing protocol data 2025-01-15 15:16:30 -05:00
Luke Parker
f36bbcba25 Flatten the map of preprocesses/shares, send Participant index with DkgParticipation 2025-01-15 14:24:51 -05:00
Luke Parker
167826aa88 Implement SeraiAddress <-> Participant mapping and add RemoveParticipant transactions 2025-01-15 12:51:35 -05:00
Luke Parker
bea4f92b7a Fix parity-db builds for the Coordinator 2025-01-15 12:10:11 -05:00
Luke Parker
7312fa8d3c Spawn PublishSlashReportTask
Updates it so that it'll try for every network instead of returning after any
network fails.

Uses the SlashReport type throughout the codebase.
2025-01-15 12:08:28 -05:00
Luke Parker
92a4cceeeb Spawn PublishBatchTask
Also removes the expectation Batches published via it are sent in an ordered
fashion. That won't be true if the signing protocols complete out-of-order (as
possible when we are signing them in parallel).
2025-01-15 11:21:55 -05:00
Luke Parker
3357181fe2 Handle sign::ProcessorMessage::[Preprocesses, Shares] 2025-01-15 10:47:47 -05:00
Luke Parker
7ce5bdad44 Don't add transactions for topics which have yet to be recognized 2025-01-15 07:01:24 -05:00
Luke Parker
0de3fda921 Further space out requests for cosigns from the network 2025-01-15 05:59:56 -05:00
Luke Parker
cb410cc4e0 Correct how we handle rounding errors within the penalty fn
We explicitly no longer slash stakes but we still set the maximum slash to the
allocated stake + the rewards. Now, the reward slash is bound to the rewards
and the stake slash is bound to the stake. This prevents an improperly rounded
reward slash from effecting a stake slash.
2025-01-15 02:46:31 -05:00
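A sketch of the bounding described, with hypothetical names:

```rust
// Each slash is bound to its own pool, so a reward slash rounded too high
// can never spill over into a stake slash (and vice versa)
fn penalty(reward_slash: u64, rewards: u64, stake_slash: u64, stake: u64) -> u64 {
  reward_slash.min(rewards).saturating_add(stake_slash.min(stake))
}
```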
Luke Parker
6c145a5ec3 Disable offline, disruptive slashes
Reasoning commented in codebase
2025-01-14 11:44:13 -05:00
Luke Parker
a7fef2ba7a Redesign Slash/SlashReport types with a function to calculate the penalty 2025-01-14 07:51:39 -05:00
Luke Parker
291ebf5e24 Have serai-task warnings print with the name of the task 2025-01-14 02:52:26 -05:00
Luke Parker
5e0e91c85d Add tasks to publish data onto Serai 2025-01-14 01:58:26 -05:00
Luke Parker
b5a6b0693e Add a proper error type to ContinuallyRan
This isn't necessary. Because we just log the error, never matching off of it,
we don't need any structure beyond String (or now Debug, which still gives us
a way to print the error). This is for the ergonomics of not having to
constantly write `.map_err(|e| format!("{e:?}"))`.
2025-01-12 18:29:08 -05:00
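The resulting trait shape is presumably along these lines (a sketch with
illustrative signatures, ignoring the async nature of the actual task API):

```rust
use core::fmt::Debug;

trait ContinuallyRan {
  // Any Debug type suffices, as errors are only ever logged, never matched on
  type Error: Debug;
  fn run_iteration(&mut self) -> Result<bool, Self::Error>;
}

// The caller logs via Debug, without `.map_err(|e| format!("{e:?}"))` at
// each call site
fn log_error(task: &str, e: impl Debug) {
  println!("task {task} errored: {e:?}");
}
```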
Luke Parker
3cc2abfedc Add a task to publish slash reports 2025-01-12 17:47:48 -05:00
Luke Parker
0ce9aad9b2 Add flow to add transactions onto Tributaries 2025-01-12 07:32:45 -05:00
Luke Parker
e35aa04afb Start handling messages from the processor
Does route ProcessorMessage::CosignedBlock. The rest are stubbed with TODO.
2025-01-12 06:07:55 -05:00
Luke Parker
e7de5125a2 Have processor-messages use CosignIntent/SignedCosign, not the historic cosign format
Has yet to update the processor accordingly.
2025-01-12 05:52:33 -05:00
Luke Parker
158140c3a7 Add a proper error for intake_cosign 2025-01-12 05:49:17 -05:00
Luke Parker
df9a9adaa8 Remove direct dependencies of void, async-trait 2025-01-12 03:48:43 -05:00
Luke Parker
d854807edd Make message_queue::client::Client::send fallible
Allows tasks to report the errors themselves and handle retry in our
standardized way.
2025-01-11 21:57:58 -05:00
Luke Parker
f501d46d44 Correct disabling of Nagle's algorithm 2025-01-11 06:54:43 -05:00
Luke Parker
74106b025f Publish SlashReport onto the Tributary 2025-01-11 06:51:55 -05:00
Luke Parker
e731b546ab Update documentation 2025-01-11 05:13:43 -05:00
Luke Parker
77d60660d2 Move spawn_cosign from main.rs into tributary.rs
Also refines the tasks within tributary.rs a good bit.
2025-01-11 05:12:56 -05:00
Luke Parker
3c664ff05f Re-arrange coordinator/
coordinator/tributary was tributary-chain. This crate has been renamed
tributary-sdk and moved to coordinator/tributary-sdk.

coordinator/src/tributary was our instantiation of a Tributary, the Transaction
type and scan task. This has been moved to coordinator/tributary.

The main reason for this was coordinator/main.rs becoming untidy. There
is now a collection of clean, independent APIs present in the codebase.
coordinator/main.rs is to compose them. Sometimes, these compositions are a bit
silly (reading from a channel just to forward the message to a distinct
channel). That's more than fine as the code is still readable and the value
from the cleanliness of the APIs composed far exceeds the nits from having
these odd compositions.

This breaks down a bit as we now define a global database, and have some APIs
interact with multiple other APIs.

coordinator/src/tributary was a self-contained, clean API. The recently added
task present in coordinator/tributary/mod.rs, which bound it to the rest of the
Coordinator, wasn't.

Now, coordinator/src is solely the API compositions, and all self-contained
APIs are their own crates.
2025-01-11 04:14:21 -05:00
Luke Parker
c05b0c9eba Handle Canonical, NewSet from serai-coordinator-substrate 2025-01-11 03:07:15 -05:00
Luke Parker
6d5049cab2 Move the task providing transactions onto the Tributary to the Tributary module
Slims down the main file a bit
2025-01-11 02:13:23 -05:00
Luke Parker
1419ba570a Route from tributary scanner to message-queue 2025-01-11 01:55:36 -05:00
Luke Parker
542bf2170a Provide Cosign/CosignIntent for Tributaries 2025-01-11 01:31:28 -05:00
Luke Parker
378d6b90cf Delete old Tributaries on reboot 2025-01-10 20:10:05 -05:00
Luke Parker
cbe83956aa Flesh out Coordinator main
Lot of TODOs as the APIs are all being routed together.
2025-01-10 02:24:24 -05:00
Luke Parker
091d485fd8 Have the Tributary scanner DB be distinct from the cosign DB
Allows deleting the entire Tributary scanner DB upon retirement.
2025-01-10 02:22:58 -05:00
Luke Parker
2a3eaf4d7e Wrap the entire Libp2p object in an Arc
Makes `Clone` calls significantly cheaper as now only the outer Arc is cloned
(the inner ones have been removed). Also wraps uses of Serai in an Arc as we
shouldn't actually need/want multiple caller connection pools.
2025-01-10 01:26:07 -05:00
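A sketch of the restructuring, with a hypothetical inner type:

```rust
use std::sync::Arc;

struct Libp2pInner {
  // swarm handle, channels, etc., no longer individually wrapped in Arcs
}

// `Clone` now bumps a single reference count, not one per inner field
#[derive(Clone)]
struct Libp2p(Arc<Libp2pInner>);
```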
Luke Parker
23122712cb Document validator jailing upon participation failures and slash report determination
These are TODOs. I just wanted to ensure this was written down and each seemed
too small for GH issues.
2025-01-09 19:50:39 -05:00
Luke Parker
47eb793ce9 Slash upon Tendermint evidence
Decoding slash evidence requires specifying the instantiated generic
`TendermintNetwork`. While irrelevant, that generic includes a type satisfying
`tributary::P2p`. It was only possible to route now that we've redone the P2P
API.
2025-01-09 06:58:00 -05:00
Luke Parker
9b0b5fd1e2 Have serai-cosign index finalized blocks' numbers 2025-01-09 06:57:26 -05:00
Luke Parker
893a24a1cc Better document bounds in serai-coordinator-p2p 2025-01-09 06:57:12 -05:00
Luke Parker
b101e2211a Complete serai-coordinator-p2p 2025-01-09 06:23:14 -05:00
Luke Parker
e9c1235b76 Tweak how features are activated in the coins pallet tests 2024-10-30 17:15:39 -04:00
akildemir
dc1b8dfccd add coins pallet tests (#606)
* add tests

* remove unused crate

* remove serai_abi
2024-10-30 16:05:56 -04:00
Luke Parker
d0201cf2e5 Remove potentially vartime (due to cache side-channel attacks) table access in dalek-ff-group and minimal-ed448 2024-10-27 08:51:19 -04:00
Luke Parker
f3d20e60b3 Remove --no-deps from docs build to fix linking to deps 2024-10-17 21:14:13 -04:00
Luke Parker
dafba81b40 Add wasm32-unknown-unknown target to docs build 2024-10-17 18:45:34 -04:00
Luke Parker
91f8ec53d9 Add build-dependencies into docs build 2024-10-17 18:29:47 -04:00
Luke Parker
fc9a4a08b8 Correct rust-docs component name 2024-10-17 18:12:35 -04:00
Luke Parker
45fadb21ac Correct paths in pages.yml 2024-10-17 18:05:54 -04:00
Luke Parker
28619fbee1 CI fixes
Mainly corrects for https://github.com/alloy-rs/alloy/issues/1510 yet also
corrects a missing machete ignore.
2024-10-17 18:02:57 -04:00
Luke Parker
bbe014c3a7 Have CI build with doc_auto_cfg 2024-10-17 17:48:14 -04:00
Luke Parker
fb3fadb3d3 Publish Rust docs to GH pages 2024-10-17 17:18:58 -04:00
Luke Parker
f481d20773 Correct licensing for .github 2024-10-17 17:17:36 -04:00
Luke Parker
599b2dec8f cargo update
Should fix the recent CI failures re: Ethereum as well.
2024-10-09 00:39:34 -04:00
akildemir
435f1d9ae1 add specific network/coin/balance types (#619)
* add specific network/coin/balance types

* misc fixes

* fix clippy

* misc fixes

* fix pr comments

* Make halting for external networks

* fix encode/decode
2024-10-06 22:16:11 -04:00
Luke Parker
d7ecab605e Update docs gems 2024-09-25 10:37:29 -04:00
Jeffro
805fea52ec Add link for SCALE encoding in doc 2024-09-24 14:17:28 -07:00
j-berman
48db06f901 xmr: fix scan long encrypted amount 2024-09-21 08:33:35 -07:00
Luke Parker
e9d0a5e0ed Remove stray references to monero-wallet-util 2024-09-20 04:28:23 -04:00
Luke Parker
44d05518aa Add a public TransactionKeys struct to monero-wallet
monero-wallet ships an Eventuality, yet it's across the entire transaction. It
can't prove a single output's state with a traditional payment proof. By adding
this new object, another library can obtain the ephemeral randomness used and
do any/every proof they want regarding a transaction's outputs.

Necessary for https://github.com/serai-dex/serai/issues/599.
2024-09-20 04:26:21 -04:00
Luke Parker
23b433fe6c Fix #612 2024-09-20 04:05:17 -04:00
Luke Parker
2e57168a97 Update documentation on Timelocked 2024-09-20 04:01:55 -04:00
Luke Parker
5c6160c398 Kick monero-seed, polyseed, monero-wallet-util to https://github.com/kayabaNerve/monero-wallet-util 2024-09-20 03:24:33 -04:00
Luke Parker
9eee1d971e bitcoin-serai changes from next
Expands the NotEnoughFunds error and enables fetching the entire unsigned
transaction, not just the outputs it'll have.
2024-09-20 03:14:20 -04:00
Luke Parker
e6300847d6 monero-serai changes from 2edc2f3612 2024-09-20 02:42:46 -04:00
Luke Parker
e0a3e7bea6 Change dummy payment ID behavior on 2-output, no change
This reduces those able to fingerprint the transaction from any observer of
the blockchain to just one of the two recipients.
2024-09-20 02:40:18 -04:00
Luke Parker
cbebaa1349 Tighten documentation on Block::number 2024-09-20 02:40:01 -04:00
1006 changed files with 36791 additions and 101825 deletions

View File

@@ -1,6 +1,6 @@
MIT License
Copyright (c) 2022-2023 Luke Parker
Copyright (c) 2022-2025 Luke Parker
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal

View File

@@ -5,14 +5,14 @@ inputs:
version:
description: "Version to download and run"
required: false
default: "27.0"
default: "30.0"
runs:
using: "composite"
steps:
- name: Bitcoin Daemon Cache
id: cache-bitcoind
uses: actions/cache@13aacd865c20de90d75de3b17ebe84f7a17d57d2
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # 4.2.4
with:
path: bitcoin.tar.gz
key: bitcoind-${{ runner.os }}-${{ runner.arch }}-${{ inputs.version }}

View File

@@ -7,13 +7,20 @@ runs:
- name: Remove unused packages
shell: bash
run: |
sudo apt remove -y "*msbuild*" "*powershell*" "*nuget*" "*bazel*" "*ansible*" "*terraform*" "*heroku*" "*aws*" azure-cli
# Ensure the repositories are synced
sudo apt update -y
# Actually perform the removals
sudo apt remove -y "*powershell*" "*nuget*" "*bazel*" "*ansible*" "*terraform*" "*heroku*" "*aws*" azure-cli
sudo apt remove -y "*nodejs*" "*npm*" "*yarn*" "*java*" "*kotlin*" "*golang*" "*swift*" "*julia*" "*fortran*" "*android*"
sudo apt remove -y "*apache2*" "*nginx*" "*firefox*" "*chromium*" "*chrome*" "*edge*"
sudo apt remove -y --allow-remove-essential -f shim-signed *python3*
# This removal command requires the prior removals due to unmet dependencies otherwise
sudo apt remove -y "*qemu*" "*sql*" "*texinfo*" "*imagemagick*"
sudo apt autoremove -y
sudo apt clean
docker system prune -a --volumes
# Reinstall python3 as a general dependency of a functional operating system
sudo apt install -y python3 --fix-missing
if: runner.os == 'Linux'
- name: Remove unused packages
@@ -31,19 +38,45 @@ runs:
shell: bash
run: |
if [ "$RUNNER_OS" == "Linux" ]; then
sudo apt install -y ca-certificates protobuf-compiler
sudo apt install -y ca-certificates protobuf-compiler libclang-dev
elif [ "$RUNNER_OS" == "Windows" ]; then
choco install protoc
elif [ "$RUNNER_OS" == "macOS" ]; then
brew install protobuf
brew install protobuf llvm
HOMEBREW_ROOT_PATH=/opt/homebrew # Apple Silicon
if [ $(uname -m) = "x86_64" ]; then HOMEBREW_ROOT_PATH=/usr/local; fi # Intel
ls $HOMEBREW_ROOT_PATH/opt/llvm/lib | grep "libclang.dylib" # Make sure this installed `libclang`
echo "DYLD_LIBRARY_PATH=$HOMEBREW_ROOT_PATH/opt/llvm/lib:$DYLD_LIBRARY_PATH" >> "$GITHUB_ENV"
fi
- name: Install solc
shell: bash
run: |
cargo install svm-rs
svm install 0.8.26
svm use 0.8.26
cargo +1.91.1 install svm-rs --version =0.5.22
svm install 0.8.29
svm use 0.8.29
# - name: Cache Rust
# uses: Swatinem/rust-cache@a95ba195448af2da9b00fb742d14ffaaf3c21f43
- name: Remove preinstalled Docker
shell: bash
run: |
docker system prune -a --volumes
sudo apt remove -y *docker*
# Install uidmap which will be required for the explicitly installed Docker
sudo apt install -y uidmap
if: runner.os == 'Linux'
- name: Update system dependencies
shell: bash
run: |
sudo apt update -y
sudo apt upgrade -y
sudo apt autoremove -y
sudo apt clean
if: runner.os == 'Linux'
- name: Install rootless Docker
uses: docker/setup-docker-action@e61617a16c407a86262fb923c35a616ddbe070b3 # 4.6.0
with:
rootless: true
set-host: true
if: runner.os == 'Linux'

View File

@@ -5,14 +5,14 @@ inputs:
version:
description: "Version to download and run"
required: false
default: v0.18.3.4
default: v0.18.4.4
runs:
using: "composite"
steps:
- name: Monero Wallet RPC Cache
id: cache-monero-wallet-rpc
uses: actions/cache@13aacd865c20de90d75de3b17ebe84f7a17d57d2
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # 4.2.4
with:
path: monero-wallet-rpc
key: monero-wallet-rpc-${{ runner.os }}-${{ runner.arch }}-${{ inputs.version }}

View File

@@ -5,39 +5,46 @@ inputs:
version:
description: "Version to download and run"
required: false
default: v0.18.3.4
default: v0.18.4.4
runs:
using: "composite"
steps:
- name: Monero Daemon Cache
id: cache-monerod
uses: actions/cache@13aacd865c20de90d75de3b17ebe84f7a17d57d2
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # 4.2.4
with:
path: /usr/bin/monerod
key: monerod-${{ runner.os }}-${{ runner.arch }}-${{ inputs.version }}
- name: Download the Monero Daemon
if: steps.cache-monerod.outputs.cache-hit != 'true'
# Calculates OS/ARCH to demonstrate it, yet then locks to linux-x64 due
# to the contained folder not following the same naming scheme and
# requiring further expansion not worth doing right now
shell: bash
run: |
RUNNER_OS=${{ runner.os }}
RUNNER_ARCH=${{ runner.arch }}
OS=${{ runner.os }}
ARCH=${{ runner.arch }}
RUNNER_OS=${RUNNER_OS,,}
RUNNER_ARCH=${RUNNER_ARCH,,}
OS=$(echo "$OS" | tr "[:upper:]" "[:lower:]")
ARCH=$(echo "$ARCH" | tr "[:upper:]" "[:lower:]")
RUNNER_OS=linux
RUNNER_ARCH=x64
if [ "$OS" = "windows" ]; then
OS=win
echo "Windows is unsupported at this time"
exit 1
fi
if [ "$OS" = "macos" ]; then
OS=mac
fi
if [ "$ARCH" = "arm64" ]; then
ARCH=armv8
fi
FILE=monero-$RUNNER_OS-$RUNNER_ARCH-${{ inputs.version }}.tar.bz2
FILE=monero-$OS-$ARCH-${{ inputs.version }}.tar.bz2
wget https://downloads.getmonero.org/cli/$FILE
tar -xvf $FILE
rm $FILE
sudo mv monero-x86_64-linux-gnu-${{ inputs.version }}/monerod /usr/bin/monerod
sudo mv $(find . -name monerod) /usr/bin/monerod
sudo chmod 777 /usr/bin/monerod
sudo chmod +x /usr/bin/monerod

View File

@@ -5,12 +5,12 @@ inputs:
monero-version:
description: "Monero version to download and run as a regtest node"
required: false
default: v0.18.3.4
default: v0.18.4.4
bitcoin-version:
description: "Bitcoin version to download and run as a regtest node"
required: false
default: "27.1"
default: "30.0"
runs:
using: "composite"
@@ -19,9 +19,9 @@ runs:
uses: ./.github/actions/build-dependencies
- name: Install Foundry
uses: foundry-rs/foundry-toolchain@8f1998e9878d786675189ef566a2e4bf24869773
uses: foundry-rs/foundry-toolchain@50d5a8956f2e319df19e6b57539d7e2acb9f8c1e # 1.5.0
with:
version: nightly-f625d0fa7c51e65b4bf1e8f7931cd1c6e2e285e9
version: v1.5.0
cache: false
- name: Run a Monero Regtest Node

View File

@@ -1 +1 @@
nightly-2024-07-01
nightly-2025-12-01

View File

@@ -17,7 +17,7 @@ jobs:
test-common:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
- name: Build Dependencies
uses: ./.github/actions/build-dependencies

View File

@@ -31,7 +31,7 @@ jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
- name: Install Build Dependencies
uses: ./.github/actions/build-dependencies

View File

@@ -19,7 +19,7 @@ jobs:
test-crypto:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
- name: Build Dependencies
uses: ./.github/actions/build-dependencies
@@ -32,13 +32,17 @@ jobs:
-p dalek-ff-group \
-p minimal-ed448 \
-p ciphersuite \
-p ciphersuite-kp256 \
-p multiexp \
-p schnorr-signatures \
-p dleq \
-p generalized-bulletproofs \
-p generalized-bulletproofs-circuit-abstraction \
-p ec-divisors \
-p generalized-bulletproofs-ec-gadgets \
-p prime-field \
-p short-weierstrass \
-p secq256k1 \
-p embedwards25519 \
-p dkg \
-p dkg-recovery \
-p dkg-dealer \
-p dkg-musig \
-p dkg-evrf \
-p modular-frost \
-p frost-schnorrkel

View File

@@ -9,16 +9,10 @@ jobs:
name: Run cargo deny
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Advisory Cache
uses: actions/cache@13aacd865c20de90d75de3b17ebe84f7a17d57d2
with:
path: ~/.cargo/advisory-db
key: rust-advisory-db
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
- name: Install cargo deny
run: cargo install --locked cargo-deny
run: cargo +1.91.1 install cargo-deny --version =0.18.9
- name: Run cargo deny
run: cargo deny -L error --all-features check
run: cargo deny -L error --all-features check --hide-inclusion-graph

View File

@@ -13,7 +13,7 @@ jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
- name: Install Build Dependencies
uses: ./.github/actions/build-dependencies

View File

@@ -11,11 +11,11 @@ jobs:
clippy:
strategy:
matrix:
os: [ubuntu-latest, macos-13, macos-14, windows-latest]
os: [ubuntu-latest, macos-15-intel, macos-latest, windows-latest]
runs-on: ${{ matrix.os }}
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
- name: Get nightly version to use
id: nightly
@@ -26,7 +26,7 @@ jobs:
uses: ./.github/actions/build-dependencies
- name: Install nightly rust
run: rustup toolchain install ${{ steps.nightly.outputs.version }} --profile minimal -t wasm32-unknown-unknown -c clippy
run: rustup toolchain install ${{ steps.nightly.outputs.version }} --profile minimal -t wasm32v1-none -c clippy
- name: Run Clippy
run: cargo +${{ steps.nightly.outputs.version }} clippy --all-features --all-targets -- -D warnings -A clippy::items_after_test_module
@@ -43,24 +43,18 @@ jobs:
deny:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Advisory Cache
uses: actions/cache@13aacd865c20de90d75de3b17ebe84f7a17d57d2
with:
path: ~/.cargo/advisory-db
key: rust-advisory-db
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
- name: Install cargo deny
run: cargo install --locked cargo-deny
run: cargo +1.91.1 install cargo-deny --version =0.18.9
- name: Run cargo deny
run: cargo deny -L error --all-features check
run: cargo deny -L error --all-features check --hide-inclusion-graph
fmt:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
- name: Get nightly version to use
id: nightly
@@ -73,37 +67,132 @@ jobs:
- name: Run rustfmt
run: cargo +${{ steps.nightly.outputs.version }} fmt -- --check
- name: Install foundry
uses: foundry-rs/foundry-toolchain@8f1998e9878d786675189ef566a2e4bf24869773
- name: Install Foundry
uses: foundry-rs/foundry-toolchain@50d5a8956f2e319df19e6b57539d7e2acb9f8c1e # 1.5.0
with:
version: nightly-41d4e5437107f6f42c7711123890147bc736a609
version: v1.5.0
cache: false
- name: Run forge fmt
run: FOUNDRY_FMT_SORT_INPUTS=false FOUNDRY_FMT_LINE_LENGTH=100 FOUNDRY_FMT_TAB_WIDTH=2 FOUNDRY_FMT_BRACKET_SPACING=true FOUNDRY_FMT_INT_TYPES=preserve forge fmt --check $(find . -iname "*.sol")
run: FOUNDRY_FMT_SORT_INPUTS=false FOUNDRY_FMT_LINE_LENGTH=100 FOUNDRY_FMT_TAB_WIDTH=2 FOUNDRY_FMT_BRACKET_SPACING=true FOUNDRY_FMT_INT_TYPES=preserve forge fmt --check $(find . -name "*.sol")
machete:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
- name: Verify all dependencies are in use
run: |
cargo install cargo-machete
cargo machete
cargo +1.91.1 install cargo-machete --version =0.9.1
cargo +1.91.1 machete
msrv:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
- name: Verify claimed `rust-version`
shell: bash
run: |
cargo +1.91.1 install cargo-msrv --version =0.18.4
function check_msrv {
# We `cd` into the directory passed as the first argument, but will return to the
# directory called from.
return_to=$(pwd)
echo "Checking $1"
cd $1
# We then find the existing `rust-version` using `grep` (for the right line) and then a
# regex (to strip to just the major and minor version).
existing=$(cat ./Cargo.toml | grep "rust-version" | grep -Eo "[0-9]+\.[0-9]+")
# We then backup the `Cargo.toml`, allowing us to restore it after, saving time on future
# MSRV checks (as they'll benefit from immediately exiting if the queried version is less
# than the declared MSRV).
mv ./Cargo.toml ./Cargo.toml.bak
# We then use an inverted (`-v`) grep to remove the existing `rust-version` from the
# `Cargo.toml`, as required because else earlier versions of Rust won't even attempt to
# compile this crate.
cat ./Cargo.toml.bak | grep -v "rust-version" > Cargo.toml
# We then find the actual `rust-version` using `cargo-msrv` (again stripping to just the
# major and minor version).
actual=$(cargo msrv find --output-format minimal | grep -Eo "^[0-9]+\.[0-9]+")
# Finally, we compare the two.
echo "Declared rust-version: $existing"
echo "Actual rust-version: $actual"
[ $existing == $actual ]
result=$?
# Restore the original `Cargo.toml`.
rm Cargo.toml
mv ./Cargo.toml.bak ./Cargo.toml
# Return to the directory called from and return the result.
cd $return_to
return $result
}
# Check each member of the workspace
function check_workspace {
# Get the members array from the workspace's `Cargo.toml`
cargo_toml_lines=$(cat ./Cargo.toml | wc -l)
# Keep all lines after the start of the array, then keep all lines before the next "]"
members=$(cat Cargo.toml | grep "members\ \=\ \[" -m1 -A$cargo_toml_lines | grep "]" -m1 -B$cargo_toml_lines)
# Parse out any comments, whitespace, including comments post-fixed on the same line as an entry
# We accomplish the latter by pruning all characters after the entry's ","
members=$(echo "$members" | grep -Ev "^[[:space:]]*(#|$)" | awk -F',' '{print $1","}')
# Replace the first line, which was "members = [" and is now "members = [,", with "["
members=$(echo "$members" | sed "1s/.*/\[/")
# Correct the last line, which was malleated to "],"
members=$(echo "$members" | sed "$(echo "$members" | wc -l)s/\]\,/\]/")
# Don't check the following
# Most of these are binaries, with the exception of the Substrate runtime which has a
# bespoke build pipeline
members=$(echo "$members" | grep -v "networks/ethereum/relayer\"")
members=$(echo "$members" | grep -v "message-queue\"")
members=$(echo "$members" | grep -v "processor/bin\"")
members=$(echo "$members" | grep -v "processor/bitcoin\"")
members=$(echo "$members" | grep -v "processor/ethereum\"")
members=$(echo "$members" | grep -v "processor/monero\"")
members=$(echo "$members" | grep -v "coordinator\"")
members=$(echo "$members" | grep -v "substrate/runtime\"")
members=$(echo "$members" | grep -v "substrate/node\"")
members=$(echo "$members" | grep -v "orchestration\"")
# Don't check the tests
members=$(echo "$members" | grep -v "mini\"")
members=$(echo "$members" | grep -v "tests/")
# Remove the trailing comma by replacing the last line's "," with ""
members=$(echo "$members" | sed "$(($(echo "$members" | wc -l) - 1))s/\,//")
echo $members | jq -r ".[]" | while read -r member; do
check_msrv $member
correct=$?
if [ $correct -ne 0 ]; then
return $correct
fi
done
}
check_workspace
slither:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
- name: Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Slither
run: |
python3 -m pip install solc-select
solc-select install 0.8.26
solc-select use 0.8.26
python3 -m pip install slither-analyzer==0.11.3
python3 -m pip install slither-analyzer
slither --include-paths ./networks/ethereum/schnorr/contracts/Schnorr.sol
slither ./networks/ethereum/schnorr/contracts/Schnorr.sol
slither --include-paths ./networks/ethereum/schnorr/contracts ./networks/ethereum/schnorr/contracts/tests/Schnorr.sol
slither processor/ethereum/deployer/contracts/Deployer.sol
slither processor/ethereum/erc20/contracts/IERC20.sol
@@ -112,3 +201,14 @@ jobs:
cp processor/ethereum/erc20/contracts/IERC20.sol processor/ethereum/router/contracts/
cd processor/ethereum/router/contracts
slither Router.sol
shellcheck:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
- name: shellcheck
run: |
sudo apt install -y shellcheck
find . -name "*.sh" | while read -r script; do
shellcheck --enable=all --shell=sh --severity=info $script
done

View File

@@ -27,7 +27,7 @@ jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
- name: Install Build Dependencies
uses: ./.github/actions/build-dependencies

View File

@@ -17,7 +17,7 @@ jobs:
test-common:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
- name: Build Dependencies
uses: ./.github/actions/build-dependencies

View File

@@ -1,77 +0,0 @@
name: Monero Tests
on:
push:
branches:
- develop
paths:
- "networks/monero/**"
- "processor/**"
pull_request:
paths:
- "networks/monero/**"
- "processor/**"
workflow_dispatch:
jobs:
# Only run these once since they will be consistent regardless of any node
unit-tests:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Test Dependencies
uses: ./.github/actions/test-dependencies
- name: Run Unit Tests Without Features
run: |
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-io --lib
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-generators --lib
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-primitives --lib
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-mlsag --lib
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-clsag --lib
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-borromean --lib
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-bulletproofs --lib
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-serai --lib
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-rpc --lib
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-simple-request-rpc --lib
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-address --lib
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-wallet --lib
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-seed --lib
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package polyseed --lib
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-wallet-util --lib
# Doesn't run unit tests with features as the tests workflow will
integration-tests:
runs-on: ubuntu-latest
# Test against all supported protocol versions
strategy:
matrix:
version: [v0.17.3.2, v0.18.3.4]
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Test Dependencies
uses: ./.github/actions/test-dependencies
with:
monero-version: ${{ matrix.version }}
- name: Run Integration Tests Without Features
run: |
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-serai --test '*'
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-simple-request-rpc --test '*'
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-wallet --test '*'
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-wallet-util --test '*'
- name: Run Integration Tests
# Don't run if the the tests workflow also will
if: ${{ matrix.version != 'v0.18.3.4' }}
run: |
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-serai --all-features --test '*'
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-simple-request-rpc --test '*'
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-wallet --all-features --test '*'
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-wallet-util --all-features --test '*'

View File

@@ -9,7 +9,7 @@ jobs:
name: Update nightly
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
with:
submodules: "recursive"

View File

@@ -1,258 +0,0 @@
name: Weekly MSRV Check
on:
schedule:
- cron: "0 0 * * 0"
workflow_dispatch:
jobs:
msrv-common:
name: Run cargo msrv on common
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Install Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Install cargo msrv
run: cargo install --locked cargo-msrv
- name: Run cargo msrv on common
run: |
cargo msrv verify --manifest-path common/zalloc/Cargo.toml
cargo msrv verify --manifest-path common/std-shims/Cargo.toml
cargo msrv verify --manifest-path common/env/Cargo.toml
cargo msrv verify --manifest-path common/db/Cargo.toml
cargo msrv verify --manifest-path common/task/Cargo.toml
cargo msrv verify --manifest-path common/request/Cargo.toml
cargo msrv verify --manifest-path common/patchable-async-sleep/Cargo.toml
msrv-crypto:
name: Run cargo msrv on crypto
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Install Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Install cargo msrv
run: cargo install --locked cargo-msrv
- name: Run cargo msrv on crypto
run: |
cargo msrv verify --manifest-path crypto/transcript/Cargo.toml
cargo msrv verify --manifest-path crypto/ff-group-tests/Cargo.toml
cargo msrv verify --manifest-path crypto/dalek-ff-group/Cargo.toml
cargo msrv verify --manifest-path crypto/ed448/Cargo.toml
cargo msrv verify --manifest-path crypto/multiexp/Cargo.toml
cargo msrv verify --manifest-path crypto/dleq/Cargo.toml
cargo msrv verify --manifest-path crypto/ciphersuite/Cargo.toml
cargo msrv verify --manifest-path crypto/schnorr/Cargo.toml
cargo msrv verify --manifest-path crypto/evrf/generalized-bulletproofs/Cargo.toml
cargo msrv verify --manifest-path crypto/evrf/circuit-abstraction/Cargo.toml
cargo msrv verify --manifest-path crypto/evrf/divisors/Cargo.toml
cargo msrv verify --manifest-path crypto/evrf/ec-gadgets/Cargo.toml
cargo msrv verify --manifest-path crypto/evrf/embedwards25519/Cargo.toml
cargo msrv verify --manifest-path crypto/evrf/secq256k1/Cargo.toml
cargo msrv verify --manifest-path crypto/dkg/Cargo.toml
cargo msrv verify --manifest-path crypto/frost/Cargo.toml
cargo msrv verify --manifest-path crypto/schnorrkel/Cargo.toml
msrv-networks:
name: Run cargo msrv on networks
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Install Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Install cargo msrv
run: cargo install --locked cargo-msrv
- name: Run cargo msrv on networks
run: |
cargo msrv verify --manifest-path networks/bitcoin/Cargo.toml
cargo msrv verify --manifest-path networks/ethereum/build-contracts/Cargo.toml
cargo msrv verify --manifest-path networks/ethereum/schnorr/Cargo.toml
cargo msrv verify --manifest-path networks/ethereum/alloy-simple-request-transport/Cargo.toml
cargo msrv verify --manifest-path networks/ethereum/relayer/Cargo.toml --features parity-db
cargo msrv verify --manifest-path networks/monero/io/Cargo.toml
cargo msrv verify --manifest-path networks/monero/generators/Cargo.toml
cargo msrv verify --manifest-path networks/monero/primitives/Cargo.toml
cargo msrv verify --manifest-path networks/monero/ringct/mlsag/Cargo.toml
cargo msrv verify --manifest-path networks/monero/ringct/clsag/Cargo.toml
cargo msrv verify --manifest-path networks/monero/ringct/borromean/Cargo.toml
cargo msrv verify --manifest-path networks/monero/ringct/bulletproofs/Cargo.toml
cargo msrv verify --manifest-path networks/monero/Cargo.toml
cargo msrv verify --manifest-path networks/monero/rpc/Cargo.toml
cargo msrv verify --manifest-path networks/monero/rpc/simple-request/Cargo.toml
cargo msrv verify --manifest-path networks/monero/wallet/address/Cargo.toml
cargo msrv verify --manifest-path networks/monero/wallet/Cargo.toml
cargo msrv verify --manifest-path networks/monero/verify-chain/Cargo.toml
msrv-message-queue:
name: Run cargo msrv on message-queue
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Install Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Install cargo msrv
run: cargo install --locked cargo-msrv
- name: Run cargo msrv on message-queue
run: |
cargo msrv verify --manifest-path message-queue/Cargo.toml --features parity-db
msrv-processor:
name: Run cargo msrv on processor
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Install Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Install cargo msrv
run: cargo install --locked cargo-msrv
- name: Run cargo msrv on processor
run: |
cargo msrv verify --manifest-path processor/view-keys/Cargo.toml
cargo msrv verify --manifest-path processor/primitives/Cargo.toml
cargo msrv verify --manifest-path processor/messages/Cargo.toml
cargo msrv verify --manifest-path processor/scanner/Cargo.toml
cargo msrv verify --manifest-path processor/scheduler/primitives/Cargo.toml
cargo msrv verify --manifest-path processor/scheduler/smart-contract/Cargo.toml
cargo msrv verify --manifest-path processor/scheduler/utxo/primitives/Cargo.toml
cargo msrv verify --manifest-path processor/scheduler/utxo/standard/Cargo.toml
cargo msrv verify --manifest-path processor/scheduler/utxo/transaction-chaining/Cargo.toml
cargo msrv verify --manifest-path processor/key-gen/Cargo.toml
cargo msrv verify --manifest-path processor/frost-attempt-manager/Cargo.toml
cargo msrv verify --manifest-path processor/signers/Cargo.toml
cargo msrv verify --manifest-path processor/bin/Cargo.toml --features parity-db
cargo msrv verify --manifest-path processor/bitcoin/Cargo.toml
cargo msrv verify --manifest-path processor/ethereum/primitives/Cargo.toml
cargo msrv verify --manifest-path processor/ethereum/test-primitives/Cargo.toml
cargo msrv verify --manifest-path processor/ethereum/erc20/Cargo.toml
cargo msrv verify --manifest-path processor/ethereum/deployer/Cargo.toml
cargo msrv verify --manifest-path processor/ethereum/router/Cargo.toml
cargo msrv verify --manifest-path processor/ethereum/Cargo.toml
cargo msrv verify --manifest-path processor/monero/Cargo.toml
msrv-coordinator:
name: Run cargo msrv on coordinator
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Install Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Install cargo msrv
run: cargo install --locked cargo-msrv
- name: Run cargo msrv on coordinator
run: |
cargo msrv verify --manifest-path coordinator/tributary/tendermint/Cargo.toml
cargo msrv verify --manifest-path coordinator/tributary/Cargo.toml
cargo msrv verify --manifest-path coordinator/cosign/Cargo.toml
cargo msrv verify --manifest-path coordinator/substrate/Cargo.toml
cargo msrv verify --manifest-path coordinator/p2p/Cargo.toml
cargo msrv verify --manifest-path coordinator/p2p/libp2p/Cargo.toml
cargo msrv verify --manifest-path coordinator/Cargo.toml
msrv-substrate:
name: Run cargo msrv on substrate
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Install Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Install cargo msrv
run: cargo install --locked cargo-msrv
- name: Run cargo msrv on substrate
run: |
cargo msrv verify --manifest-path substrate/primitives/Cargo.toml
cargo msrv verify --manifest-path substrate/coins/primitives/Cargo.toml
cargo msrv verify --manifest-path substrate/coins/pallet/Cargo.toml
cargo msrv verify --manifest-path substrate/dex/pallet/Cargo.toml
cargo msrv verify --manifest-path substrate/economic-security/pallet/Cargo.toml
cargo msrv verify --manifest-path substrate/genesis-liquidity/primitives/Cargo.toml
cargo msrv verify --manifest-path substrate/genesis-liquidity/pallet/Cargo.toml
cargo msrv verify --manifest-path substrate/in-instructions/primitives/Cargo.toml
cargo msrv verify --manifest-path substrate/in-instructions/pallet/Cargo.toml
cargo msrv verify --manifest-path substrate/validator-sets/pallet/Cargo.toml
cargo msrv verify --manifest-path substrate/validator-sets/primitives/Cargo.toml
cargo msrv verify --manifest-path substrate/emissions/primitives/Cargo.toml
cargo msrv verify --manifest-path substrate/emissions/pallet/Cargo.toml
cargo msrv verify --manifest-path substrate/signals/primitives/Cargo.toml
cargo msrv verify --manifest-path substrate/signals/pallet/Cargo.toml
cargo msrv verify --manifest-path substrate/abi/Cargo.toml
cargo msrv verify --manifest-path substrate/client/Cargo.toml
cargo msrv verify --manifest-path substrate/runtime/Cargo.toml
cargo msrv verify --manifest-path substrate/node/Cargo.toml
msrv-orchestration:
name: Run cargo msrv on orchestration
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Install Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Install cargo msrv
run: cargo install --locked cargo-msrv
- name: Run cargo msrv on orchestration
run: |
cargo msrv verify --manifest-path orchestration/Cargo.toml
msrv-mini:
name: Run cargo msrv on mini
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Install Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Install cargo msrv
run: cargo install --locked cargo-msrv
- name: Run cargo msrv on mini
run: |
cargo msrv verify --manifest-path mini/Cargo.toml

View File

@@ -21,7 +21,7 @@ jobs:
test-networks:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
- name: Test Dependencies
uses: ./.github/actions/test-dependencies
@@ -34,19 +34,3 @@ jobs:
-p ethereum-schnorr-contract \
-p alloy-simple-request-transport \
-p serai-ethereum-relayer \
-p monero-io \
-p monero-generators \
-p monero-primitives \
-p monero-mlsag \
-p monero-clsag \
-p monero-borromean \
-p monero-bulletproofs \
-p monero-serai \
-p monero-rpc \
-p monero-simple-request-rpc \
-p monero-address \
-p monero-wallet \
-p monero-seed \
-p polyseed \
-p monero-wallet-util \
-p monero-serai-verify-chain

View File

@@ -23,13 +23,23 @@ jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
- name: Install Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Get nightly version to use
id: nightly
shell: bash
run: echo "version=$(cat .github/nightly-version)" >> $GITHUB_OUTPUT
- name: Install RISC-V Toolchain
run: sudo apt update && sudo apt install -y gcc-riscv64-unknown-elf gcc-multilib && rustup target add riscv32imac-unknown-none-elf
run: |
sudo apt update
sudo apt install -y gcc-riscv64-unknown-elf gcc-multilib
rustup toolchain install ${{ steps.nightly.outputs.version }} --profile minimal --component rust-src --target riscv32imac-unknown-none-elf
- name: Verify no-std builds
run: CFLAGS=-I/usr/include cargo build --target riscv32imac-unknown-none-elf -p serai-no-std-tests
run: |
CFLAGS=-I/usr/include cargo +${{ steps.nightly.outputs.version }} build --target riscv32imac-unknown-none-elf -Z build-std=core -p serai-no-std-tests
CFLAGS=-I/usr/include cargo +${{ steps.nightly.outputs.version }} build --target riscv32imac-unknown-none-elf -Z build-std=core,alloc -p serai-no-std-tests --features "alloc"

View File

@@ -1,6 +1,7 @@
# MIT License
#
# Copyright (c) 2022 just-the-docs
# Copyright (c) 2022-2024 Luke Parker
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
@@ -20,31 +21,21 @@
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
# This workflow uses actions that are not certified by GitHub.
# They are provided by a third-party and are governed by
# separate terms of service, privacy policy, and support
# documentation.
# Sample workflow for building and deploying a Jekyll site to GitHub Pages
name: Deploy Jekyll site to Pages
name: Deploy Rust docs and Jekyll site to Pages
on:
push:
branches:
- "develop"
paths:
- "docs/**"
# Allows you to run this workflow manually from the Actions tab
workflow_dispatch:
# Sets permissions of the GITHUB_TOKEN to allow deployment to GitHub Pages
permissions:
contents: read
pages: write
id-token: write
# Allow one concurrent deployment
# Only allow one concurrent deployment
concurrency:
group: "pages"
cancel-in-progress: true
@@ -53,27 +44,37 @@ jobs:
# Build job
build:
runs-on: ubuntu-latest
defaults:
run:
working-directory: docs
steps:
- name: Checkout
uses: actions/checkout@v3
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
- name: Setup Ruby
uses: ruby/setup-ruby@v1
uses: ruby/setup-ruby@8aeb6ff8030dd539317f8e1769a044873b56ea71 # 1.268.0
with:
bundler-cache: true
cache-version: 0
working-directory: "${{ github.workspace }}/docs"
- name: Setup Pages
id: pages
uses: actions/configure-pages@v3
uses: actions/configure-pages@983d7736d9b0ae728b81ab479565c72886d7745b # 5.0.0
- name: Build with Jekyll
run: bundle exec jekyll build --baseurl "${{ steps.pages.outputs.base_path }}"
run: cd ${{ github.workspace }}/docs && bundle exec jekyll build --baseurl "${{ steps.pages.outputs.base_path }}"
env:
JEKYLL_ENV: production
- name: Get nightly version to use
id: nightly
shell: bash
run: echo "version=$(cat .github/nightly-version)" >> $GITHUB_OUTPUT
- name: Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Build Rust docs
run: |
rustup toolchain install ${{ steps.nightly.outputs.version }} --profile minimal -t wasm32v1-none -c rust-docs
RUSTDOCFLAGS="--cfg docsrs" cargo +${{ steps.nightly.outputs.version }} doc --workspace --no-deps --all-features
mv target/doc docs/_site/rust
- name: Upload artifact
uses: actions/upload-pages-artifact@v1
uses: actions/upload-pages-artifact@7b1f4a764d45c48632c6b24a0339c27f5614fb0b # 4.0.0
with:
path: "docs/_site/"
@@ -87,4 +88,4 @@ jobs:
steps:
- name: Deploy to GitHub Pages
id: deployment
uses: actions/deploy-pages@v2
uses: actions/deploy-pages@d6db90164ac5ed86f2b6aed7e0febac5b3c0c03e # 4.0.5

View File

@@ -31,7 +31,7 @@ jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
- name: Install Build Dependencies
uses: ./.github/actions/build-dependencies

View File

@@ -27,10 +27,10 @@ jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
- name: Install Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Run Reproducible Runtime tests
run: GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features -p serai-reproducible-runtime-tests
run: GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features -p serai-reproducible-runtime-tests -- --nocapture

.github/workflows/stack-size.yml vendored Normal file
View File

@@ -0,0 +1,166 @@
name: Check Update Default Stack Size
on:
push:
paths:
- "orchestration/increase_default_stack_size.sh"
pull_request:
paths:
- "orchestration/increase_default_stack_size.sh"
workflow_dispatch:
# Also run weekly to ensure this doesn't inadvertently decay
schedule:
- cron: "0 0 * * 1"
jobs:
stack_size:
strategy:
matrix:
os: [ubuntu-latest, ubuntu-24.04, ubuntu-22.04, macos-15-intel, macos-latest]
runs-on: ${{ matrix.os }}
steps:
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
- name: Install Go
uses: actions/setup-go@4dc6199c7b1a012772edbd06daecab0f50c9053c # 6.1.0
with:
go-version: stable
- name: Monero Daemon Cache
id: cache-monerod
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # 4.2.4
with:
path: monerod
key: stack-size-monerod
- name: Download the Monero Daemon
if: steps.cache-monerod.outputs.cache-hit != 'true'
run: |
# We explicitly download the Linux binary as this script executes over an ELF binary
wget https://downloads.getmonero.org/cli/monero-linux-x64-v0.18.4.4.tar.bz2
tar -xvf monero-linux-x64-v0.18.4.4.tar.bz2
mv $(find . -name monerod) .
- name: Verify expected behavior
shell: bash
run: |
STACK=$((8 * 1024 * 1024))
OS=${{ runner.os }}
if [ "$OS" = "Linux" ]; then
sudo apt update -y
sudo apt install -y ksh bash dash zsh busybox posh mksh yash
sudo ln -s "$(which busybox)" /usr/bin/ash
sudo ln -s "$(which busybox)" /usr/bin/hush
wget http://ftp.us.debian.org/debian/pool/main/g/gash/gash_0.3.1-1_amd64.deb
sudo apt install ./gash_0.3.1-1_amd64.deb
SHELLS="sh ksh bash dash zsh ash hush posh mksh lksh gash yash"
fi
if [ "$OS" = "macOS" ]; then
brew install binutils # `readelf`
# `binutils` is not placed within the path, so find its
# `readelf` bin and manually move it into our path
HOMEBREW_ROOT_PATH=/opt/homebrew # Apple Silicon
if [ $(uname -m) = "x86_64" ]; then HOMEBREW_ROOT_PATH=/usr/local; fi # Intel
sudo cp $(find "$HOMEBREW_ROOT_PATH" -name readelf) /usr/local/bin/
# macOS has the benefit of packaging `oksh`, `osh`, and having distinct core tools
# TODO: `posh` is packaged but doesn't work: https://github.com/serai-dex/serai/issues/703
brew install ksh93 bash dash-shell zsh mksh oksh yash oils-for-unix
SHELLS="sh ksh bash dash zsh mksh oksh yash osh"
# macOS also has the benefit of packaging (via MacPorts) `mrsh`,
which explicitly attempts to be exactly POSIX, without any extensions.
# We first have to install MacPorts, the easiest method being via source.
curl -O https://distfiles.macports.org/MacPorts/MacPorts-2.11.6.tar.bz2
tar xf MacPorts-2.11.6.tar.bz2
cd MacPorts-2.11.6
./configure
make
sudo make install
cd ..
PATH=$PATH:/opt/local/bin
sudo port -v selfupdate
# Now, we install `mrsh`
# TODO: https://github.com/serai-dex/serai/issues/704
# sudo port install mrsh
# SHELLS="$SHELLS mrsh"
fi
# Install shells available via `cargo`
cargo install brush-shell
SHELLS="$SHELLS brush"
# We would also test with `nsh` here if not for https://github.com/nuta/nsh/issues/49
# cargo install nsh
# SHELLS="$SHELLS nsh"
# Install shells available via `go`
# TODO: https://github.com/u-root/u-root/issues/3474
# GOBIN=/usr/local/bin go install github.com/u-root/u-root/cmds/core/gosh@latest
# SHELLS="$SHELLS gosh"
# Patch with `muslstack`
cp monerod monerod-muslstack
GOBIN=$(pwd) go install github.com/yaegashi/muslstack@d19cc5866abce3ca59dfc1666df7cc97097d0933
./muslstack -s "$STACK" ./monerod-muslstack
# Patch with `chelf`, which only works on a Linux host (due to requiring `elf.h`)
# TODO: Install the header on macOS so `chelf` may be used as the source of truth
if [ "$OS" = "Linux" ]; then
cp monerod monerod-chelf
git clone https://github.com/Gottox/chelf
cd chelf
git checkout b2994186cea7b7d61a588fd06c1cc1ae75bcc21a
make
./chelf -s "$STACK" ../monerod-chelf
cd ..
fi
# Run our script with all installed shells
for shell in $SHELLS; do
echo "Executing \`$shell\`"
cp monerod monerod-idss-$shell
ln -s "$(which $shell)" sh
./sh ./orchestration/increase_default_stack_size.sh monerod-idss-$shell
rm ./sh
done
# Verify they all had the same result
sha256() {
sha256sum "$1" | cut -d' ' -f1
}
CHELF=$(sha256 monerod-muslstack)
find . -name "monerod-*" | while read -r bin; do
BIN=$(sha256 "$bin")
if [ ! "$CHELF" = "$BIN" ]; then
echo "Different artifact between \`monerod-muslstack\` ($CHELF) and \`$bin\` ($BIN)"
exit 1
fi
done
# Verify the integrity of the result
read_stack() {
STACK_INFO=$(readelf "$1" -l | grep STACK -A1)
MEMSZ=$(printf "%s\n" "$STACK_INFO" | tail -n1 | sed -E s/^[[:space:]]*//g | cut -f2 -d' ')
printf "%i" $((MEMSZ))
}
INITIAL_STACK=$(read_stack monerod)
if [ "$INITIAL_STACK" -ne "0" ]; then
echo "Initial \`PT_GNU_STACK\` wasn't 0"
exit 2
fi
UPDATED_STACK=$(read_stack monerod-muslstack)
if [ "$UPDATED_STACK" -ne "$STACK" ]; then
echo "Updated \`PT_GNU_STACK\` ($UPDATED_STACK) wasn't 8 MB ($STACK)"
exit 3
fi
# Only one byte should be different due to the bit pattern of 8 MB
BYTES_DIFFERENT=$(cmp -l monerod monerod-muslstack | wc -l || true)
if [ "$BYTES_DIFFERENT" -ne 1 ]; then
echo "More than one byte was different between the two binaries"
exit 4
fi
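To make the `read_stack` check above concrete: `PT_GNU_STACK` is the program header whose `p_memsz` field carries the requested stack size, and 8 MiB is `0x0080_0000`, so patching it from `0` flips exactly one byte. A minimal Rust sketch of the same verification, written for this document (not part of the repository) and assuming a little-endian ELF64 binary:

use std::fs;

const PT_GNU_STACK: u32 = 0x6474_e551;

fn read_u16(elf: &[u8], at: usize) -> u16 {
  u16::from_le_bytes(elf[at .. (at + 2)].try_into().unwrap())
}
fn read_u32(elf: &[u8], at: usize) -> u32 {
  u32::from_le_bytes(elf[at .. (at + 4)].try_into().unwrap())
}
fn read_u64(elf: &[u8], at: usize) -> u64 {
  u64::from_le_bytes(elf[at .. (at + 8)].try_into().unwrap())
}

// Find the `p_memsz` of the `PT_GNU_STACK` program header, as `readelf -l` reports it
fn pt_gnu_stack_memsz(elf: &[u8]) -> Option<u64> {
  let phoff = usize::try_from(read_u64(elf, 0x20)).unwrap(); // e_phoff
  let phentsize = usize::from(read_u16(elf, 0x36)); // e_phentsize
  let phnum = usize::from(read_u16(elf, 0x38)); // e_phnum
  for i in 0 .. phnum {
    let phdr = phoff + (i * phentsize);
    if read_u32(elf, phdr) == PT_GNU_STACK { // p_type
      return Some(read_u64(elf, phdr + 0x28)); // p_memsz
    }
  }
  None
}

fn main() {
  let patched = fs::read("monerod-muslstack").unwrap();
  assert_eq!(pt_gnu_stack_memsz(&patched), Some(8 * 1024 * 1024));
}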

View File

@@ -29,7 +29,7 @@ jobs:
test-infra:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
- name: Build Dependencies
uses: ./.github/actions/build-dependencies
@@ -60,9 +60,11 @@ jobs:
-p serai-ethereum-processor \
-p serai-monero-processor \
-p tendermint-machine \
-p tributary-chain \
-p tributary-sdk \
-p serai-cosign-types \
-p serai-cosign \
-p serai-coordinator-substrate \
-p serai-coordinator-tributary \
-p serai-coordinator-p2p \
-p serai-coordinator-libp2p-p2p \
-p serai-coordinator \
@@ -72,7 +74,7 @@ jobs:
test-substrate:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
- name: Build Dependencies
uses: ./.github/actions/build-dependencies
@@ -81,31 +83,33 @@ jobs:
run: |
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features \
-p serai-primitives \
-p serai-coins-primitives \
-p serai-coins-pallet \
-p serai-dex-pallet \
-p serai-validator-sets-primitives \
-p serai-validator-sets-pallet \
-p serai-genesis-liquidity-primitives \
-p serai-genesis-liquidity-pallet \
-p serai-emissions-primitives \
-p serai-emissions-pallet \
-p serai-economic-security-pallet \
-p serai-in-instructions-primitives \
-p serai-in-instructions-pallet \
-p serai-signals-primitives \
-p serai-signals-pallet \
-p serai-abi \
-p substrate-median \
-p serai-core-pallet \
-p serai-coins-pallet \
-p serai-validator-sets-pallet \
-p serai-signals-pallet \
-p serai-dex-pallet \
-p serai-genesis-liquidity-pallet \
-p serai-economic-security-pallet \
-p serai-emissions-pallet \
-p serai-in-instructions-pallet \
-p serai-runtime \
-p serai-node
-p serai-substrate-tests
test-serai-client:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
- name: Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Run Tests
run: GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features -p serai-client
run: |
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features -p serai-client-serai
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features -p serai-client-bitcoin
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features -p serai-client-ethereum
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features -p serai-client-monero
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features -p serai-client

.gitignore vendored
View File

@@ -1,7 +1,13 @@
target
# Don't commit any `Cargo.lock` which aren't the workspace's
Cargo.lock
!/Cargo.lock
# Don't commit any `Dockerfile`, as they're auto-generated, except the only one which isn't
Dockerfile
Dockerfile.fast-epoch
!orchestration/runtime/Dockerfile
.test-logs
.vscode

Cargo.lock generated

File diff suppressed because it is too large.

View File

@@ -1,20 +1,6 @@
[workspace]
resolver = "2"
members = [
# Version patches
"patches/parking_lot_core",
"patches/parking_lot",
"patches/zstd",
"patches/rocksdb",
# std patches
"patches/matches",
"patches/is-terminal",
# Rewrites/redirects
"patches/option-ext",
"patches/directories-next",
"common/std-shims",
"common/zalloc",
"common/patchable-async-sleep",
@@ -29,19 +15,21 @@ members = [
"crypto/dalek-ff-group",
"crypto/ed448",
"crypto/ciphersuite",
"crypto/ciphersuite/kp256",
"crypto/multiexp",
"crypto/schnorr",
"crypto/dleq",
"crypto/evrf/secq256k1",
"crypto/evrf/embedwards25519",
"crypto/evrf/generalized-bulletproofs",
"crypto/evrf/circuit-abstraction",
"crypto/evrf/divisors",
"crypto/evrf/ec-gadgets",
"crypto/prime-field",
"crypto/short-weierstrass",
"crypto/secq256k1",
"crypto/embedwards25519",
"crypto/dkg",
"crypto/dkg/recovery",
"crypto/dkg/dealer",
"crypto/dkg/musig",
"crypto/dkg/evrf",
"crypto/frost",
"crypto/schnorrkel",
@@ -52,23 +40,6 @@ members = [
"networks/ethereum/alloy-simple-request-transport",
"networks/ethereum/relayer",
"networks/monero/io",
"networks/monero/generators",
"networks/monero/primitives",
"networks/monero/ringct/mlsag",
"networks/monero/ringct/clsag",
"networks/monero/ringct/borromean",
"networks/monero/ringct/bulletproofs",
"networks/monero",
"networks/monero/rpc",
"networks/monero/rpc/simple-request",
"networks/monero/wallet/address",
"networks/monero/wallet",
"networks/monero/wallet/seed",
"networks/monero/wallet/polyseed",
"networks/monero/wallet/util",
"networks/monero/verify-chain",
"message-queue",
"processor/messages",
@@ -91,48 +62,43 @@ members = [
"processor/ethereum/primitives",
"processor/ethereum/test-primitives",
"processor/ethereum/deployer",
"processor/ethereum/router",
"processor/ethereum/erc20",
"processor/ethereum/router",
"processor/ethereum",
"processor/monero",
"coordinator/tributary/tendermint",
"coordinator/tributary",
"coordinator/tributary-sdk/tendermint",
"coordinator/tributary-sdk",
"coordinator/cosign/types",
"coordinator/cosign",
"coordinator/substrate",
"coordinator/tributary",
"coordinator/p2p",
"coordinator/p2p/libp2p",
"coordinator",
"substrate/primitives",
"substrate/coins/primitives",
"substrate/coins/pallet",
"substrate/dex/pallet",
"substrate/validator-sets/primitives",
"substrate/validator-sets/pallet",
"substrate/genesis-liquidity/primitives",
"substrate/genesis-liquidity/pallet",
"substrate/emissions/primitives",
"substrate/emissions/pallet",
"substrate/economic-security/pallet",
"substrate/in-instructions/primitives",
"substrate/in-instructions/pallet",
"substrate/signals/primitives",
"substrate/signals/pallet",
"substrate/abi",
"substrate/median",
"substrate/core",
"substrate/coins",
"substrate/validator-sets",
"substrate/signals",
"substrate/dex",
"substrate/genesis-liquidity",
"substrate/economic-security",
"substrate/emissions",
"substrate/in-instructions",
"substrate/runtime",
"substrate/node",
"substrate/client/serai",
"substrate/client/bitcoin",
"substrate/client/ethereum",
"substrate/client/monero",
"substrate/client",
"orchestration",
@@ -143,61 +109,106 @@ members = [
"tests/docker",
"tests/message-queue",
"tests/processor",
"tests/coordinator",
"tests/full-stack",
# TODO "tests/processor",
# TODO "tests/coordinator",
"tests/substrate",
# TODO "tests/full-stack",
"tests/reproducible-runtime",
]
[profile.dev]
panic = "abort"
overflow-checks = true
[profile.release]
panic = "abort"
overflow-checks = true
# These do not respect the `panic` configuration value, so we don't provide them
[profile.test]
# panic = "abort" # https://github.com/rust-lang/issues/67650
overflow-checks = true
[profile.bench]
overflow-checks = true
[profile.dev.package]
# Always compile Monero (and a variety of dependencies) with optimizations due
# to the extensive operations required for Bulletproofs
[profile.dev.package]
subtle = { opt-level = 3 }
sha3 = { opt-level = 3 }
blake2 = { opt-level = 3 }
ff = { opt-level = 3 }
group = { opt-level = 3 }
crypto-bigint = { opt-level = 3 }
secp256k1 = { opt-level = 3 }
curve25519-dalek = { opt-level = 3 }
dalek-ff-group = { opt-level = 3 }
minimal-ed448 = { opt-level = 3 }
multiexp = { opt-level = 3 }
monero-io = { opt-level = 3 }
monero-primitives = { opt-level = 3 }
monero-ed25519 = { opt-level = 3 }
monero-mlsag = { opt-level = 3 }
monero-clsag = { opt-level = 3 }
monero-borromean = { opt-level = 3 }
monero-bulletproofs-generators = { opt-level = 3 }
monero-bulletproofs = {opt-level = 3 }
monero-oxide = { opt-level = 3 }
# Always compile the eVRF DKG tree with optimizations as well
secp256k1 = { opt-level = 3 }
secq256k1 = { opt-level = 3 }
embedwards25519 = { opt-level = 3 }
generalized-bulletproofs = { opt-level = 3 }
generalized-bulletproofs-circuit-abstraction = { opt-level = 3 }
ec-divisors = { opt-level = 3 }
generalized-bulletproofs-ec-gadgets = { opt-level = 3 }
dkg = { opt-level = 3 }
monero-generators = { opt-level = 3 }
monero-borromean = { opt-level = 3 }
monero-bulletproofs = { opt-level = 3 }
monero-mlsag = { opt-level = 3 }
monero-clsag = { opt-level = 3 }
[profile.release]
panic = "unwind"
# revm also effectively requires being built with optimizations
revm = { opt-level = 3 }
revm-bytecode = { opt-level = 3 }
revm-context = { opt-level = 3 }
revm-context-interface = { opt-level = 3 }
revm-database = { opt-level = 3 }
revm-database-interface = { opt-level = 3 }
revm-handler = { opt-level = 3 }
revm-inspector = { opt-level = 3 }
revm-interpreter = { opt-level = 3 }
revm-precompile = { opt-level = 3 }
revm-primitives = { opt-level = 3 }
revm-state = { opt-level = 3 }
[patch.crates-io]
# https://github.com/rust-lang-nursery/lazy-static.rs/issues/201
lazy_static = { git = "https://github.com/rust-lang-nursery/lazy-static.rs", rev = "5735630d46572f1e5377c8f2ba0f79d18f53b10c" }
# Point to empty crates for crates unused within our tree
alloy-eip2124 = { path = "patches/ethereum/alloy-eip2124" }
ark-ff-3 = { package = "ark-ff", path = "patches/ethereum/ark-ff-0.3" }
ark-ff-4 = { package = "ark-ff", path = "patches/ethereum/ark-ff-0.4" }
c-kzg = { path = "patches/ethereum/c-kzg" }
fastrlp-3 = { package = "fastrlp", path = "patches/ethereum/fastrlp-0.3" }
fastrlp-4 = { package = "fastrlp", path = "patches/ethereum/fastrlp-0.4" }
primitive-types-12 = { package = "primitive-types", path = "patches/ethereum/primitive-types-0.12" }
rlp = { path = "patches/ethereum/rlp" }
secp256k1-30 = { package = "secp256k1", path = "patches/ethereum/secp256k1-0.30" }
parking_lot_core = { path = "patches/parking_lot_core" }
parking_lot = { path = "patches/parking_lot" }
# wasmtime pulls in an old version for this
zstd = { path = "patches/zstd" }
# Needed for WAL compression
rocksdb = { path = "patches/rocksdb" }
# Dependencies from monero-oxide which originate from within our own tree, potentially shimmed to account for deviations since publishing
std-shims = { path = "patches/std-shims" }
simple-request = { path = "patches/simple-request" }
multiexp = { path = "crypto/multiexp" }
flexible-transcript = { path = "crypto/transcript" }
ciphersuite = { path = "patches/ciphersuite" }
dalek-ff-group = { path = "patches/dalek-ff-group" }
minimal-ed448 = { path = "crypto/ed448" }
modular-frost = { path = "crypto/frost" }
# is-terminal now has an std-based solution with an equivalent API
is-terminal = { path = "patches/is-terminal" }
# So does matches
matches = { path = "patches/matches" }
# Patches due to `std` now including the required functionality
is_terminal_polyfill = { path = "patches/is_terminal_polyfill" }
lazy_static = { path = "patches/lazy_static" }
# This has a non-deprecated `std` alternative since Rust's 2024 edition
home = { path = "patches/home" }
# Updates to the latest version
darling = { path = "patches/darling" }
thiserror = { path = "patches/thiserror" }
# directories-next was created because directories was unmaintained
# directories-next is now unmaintained while directories is maintained
@@ -207,12 +218,19 @@ matches = { path = "patches/matches" }
option-ext = { path = "patches/option-ext" }
directories-next = { path = "patches/directories-next" }
# The official pasta_curves repo doesn't support Zeroize
pasta_curves = { git = "https://github.com/kayabaNerve/pasta_curves", rev = "a46b5be95cacbff54d06aad8d3bbcba42e05d616" }
# Patch from a fork back to upstream
parity-bip39 = { path = "patches/parity-bip39" }
# Patch to include `FromUniformBytes<64>` over `Scalar`
k256 = { git = "https://github.com/kayabaNerve/elliptic-curves", rev = "4994c9ab163781a88cd4a49beae812a89a44e8c3" }
p256 = { git = "https://github.com/kayabaNerve/elliptic-curves", rev = "4994c9ab163781a88cd4a49beae812a89a44e8c3" }
[workspace.lints.clippy]
incompatible_msrv = "allow" # Manually verified with a GitHub workflow
manual_is_multiple_of = "allow"
unwrap_or_default = "allow"
map_unwrap_or = "allow"
needless_continue = "allow"
borrow_as_ptr = "deny"
cast_lossless = "deny"
cast_possible_truncation = "deny"
@@ -243,7 +261,6 @@ manual_string_new = "deny"
match_bool = "deny"
match_same_arms = "deny"
missing_fields_in_debug = "deny"
needless_continue = "deny"
needless_pass_by_value = "deny"
ptr_cast_constness = "deny"
range_minus_one = "deny"
@@ -252,7 +269,7 @@ redundant_closure_for_method_calls = "deny"
redundant_else = "deny"
string_add_assign = "deny"
string_slice = "deny"
unchecked_duration_subtraction = "deny"
unchecked_time_subtraction = "deny"
uninlined_format_args = "deny"
unnecessary_box_returns = "deny"
unnecessary_join = "deny"
@@ -261,3 +278,6 @@ unnested_or_patterns = "deny"
unused_async = "deny"
unused_self = "deny"
zero_sized_map_values = "deny"
[workspace.lints.rust]
unused = "allow" # TODO: https://github.com/rust-lang/rust/issues/147648

View File

@@ -5,4 +5,4 @@ a full copy of the AGPL-3.0 License is included in the root of this repository
as a reference text. This copy should be provided with any distribution of a
crate licensed under the AGPL-3.0, as per its terms.
The GitHub actions (`.github/actions`) are licensed under the MIT license.
The GitHub actions/workflows (`.github`) are licensed under the MIT license.

View File

@@ -59,7 +59,6 @@ issued at the discretion of the Immunefi program managers.
- [Website](https://serai.exchange/): https://serai.exchange/
- [Immunefi](https://immunefi.com/bounty/serai/): https://immunefi.com/bounty/serai/
- [Twitter](https://twitter.com/SeraiDEX): https://twitter.com/SeraiDEX
- [Mastodon](https://cryptodon.lol/@serai): https://cryptodon.lol/@serai
- [Discord](https://discord.gg/mpEUtJR3vz): https://discord.gg/mpEUtJR3vz
- [Matrix](https://matrix.to/#/#serai:matrix.org): https://matrix.to/#/#serai:matrix.org
- [Reddit](https://www.reddit.com/r/SeraiDEX/): https://www.reddit.com/r/SeraiDEX/

View File

@@ -0,0 +1,14 @@
# Trail of Bits Ethereum Contracts Audit, June 2025
This audit included:
- Our Schnorr contract and associated library (/networks/ethereum/schnorr)
- Our Ethereum primitives library (/processor/ethereum/primitives)
- Our Deployer contract and associated library (/processor/ethereum/deployer)
- Our ERC20 library (/processor/ethereum/erc20)
- Our Router contract and associated library (/processor/ethereum/router)
It encompasses the codebase up to commit 4e0c58464fc4673623938335f06e2e9ea96ca8dd.
Please see
https://github.com/trailofbits/publications/blob/30c4fa3ebf39ff8e4d23ba9567344ec9691697b5/reviews/2025-04-serai-dex-security-review.pdf
for the actual report.

View File

@@ -0,0 +1,50 @@
# eVRF DKG
In 2024, the [eVRF paper](https://eprint.iacr.org/2024/397) was published to
the IACR preprint server. Within it was a one-round unbiased DKG and a
one-round unbiased threshold DKG. Unfortunately, both simply describe
communication of the secret shares as 'Alice sends $s_b$ to Bob'. In practice,
this necessitates an additional round of communication in which all
participants confirm they received their secret shares.
Within Serai, it was posited to use the same premises as the DDH eVRF itself to
achieve a verifiable encryption scheme. This allows the secret shares to be
posted to any 'bulletin board' (such as a blockchain) and for all observers to
confirm:
- A participant participated
- The secret shares sent can be received by the intended recipient so long as
they can access the bulletin board
Additionally, Serai desired a robust scheme (albeit with a biased key as the
output, which is fine for our purposes). Accordingly, our implementation
instantiates the threshold eVRF DKG from the eVRF paper, with our own proposal
for verifiable encryption, with the caller allowed to decide the set of
participants. They may:
- Select everyone, collapsing to the non-threshold unbiased DKG from the eVRF
paper
- Select a pre-determined set, collapsing to the threshold unbiased DKG from
the eVRF paper
- Select a post-determined set (with any solution for the Common Subset
problem), achieving a robust threshold biased DKG
Note that the eVRF paper proposes using the eVRF to sample coefficients yet
this is unnecessary when the resulting key will be biased. Any proof of
knowledge for the coefficients, as necessary for their extraction within the
security proofs, would be sufficient.
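For orientation, the share structure underlying such a threshold DKG, in standard notation (our own summary, not the paper's): each participant $i$ samples a degree-$t - 1$ polynomial, with

$$f_i(x) = \sum_{j = 0}^{t - 1} a_{i, j} x^j, \qquad s_{i \to b} = f_i(b), \qquad K = \sum_{i \in S} f_i(0) \cdot G$$

where $s_{i \to b}$ is the share sent to participant $b$, $S$ the selected set, and $G$ the group generator. With commitments $A_{i, j} = a_{i, j} \cdot G$ published, anyone may check $s_{i \to b} \cdot G = \sum_j b^j \cdot A_{i, j}$; the verifiable encryption described above extends this so the check may be performed against the encrypted shares posted to the bulletin board.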
MAGIC Grants contracted HashCloak to formalize Serai's proposal for a DKG and
provide proofs for its security. This resulted in
[this paper](<./Security Proofs.pdf>).
Our implementation itself is then built on top of the audited
[`generalized-bulletproofs`](https://github.com/kayabaNerve/monero-oxide/tree/generalized-bulletproofs/audits/crypto/generalized-bulletproofs)
and
[`generalized-bulletproofs-ec-gadgets`](https://github.com/monero-oxide/monero-oxide/tree/fcmp%2B%2B/audits/fcmps).
Note we do not use the originally premised DDH eVRF but rather the one premised
on
elliptic curve divisors, the methodology of which is commented on
[here](https://github.com/monero-oxide/monero-oxide/tree/fcmp%2B%2B/audits/divisors).
Our implementation itself, however, remains unaudited at this time.

Binary file not shown.

View File

@@ -7,7 +7,7 @@ repository = "https://github.com/serai-dex/serai/tree/develop/common/db"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = []
edition = "2021"
rust-version = "1.71"
rust-version = "1.77"
[package.metadata.docs.rs]
all-features = true
@@ -17,8 +17,8 @@ rustdoc-args = ["--cfg", "docsrs"]
workspace = true
[dependencies]
parity-db = { version = "0.4", default-features = false, optional = true }
rocksdb = { version = "0.23", default-features = false, features = ["zstd"], optional = true }
parity-db = { version = "0.5", default-features = false, features = ["arc"], optional = true }
rocksdb = { version = "0.24", default-features = false, features = ["zstd"], optional = true }
[features]
parity-db = ["dep:parity-db"]

View File

@@ -1,6 +1,6 @@
MIT License
Copyright (c) 2022-2023 Luke Parker
Copyright (c) 2022-2025 Luke Parker
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal

View File

@@ -15,7 +15,7 @@ pub fn serai_db_key(
///
/// Creates a unit struct and a default implementation for the `key`, `get`, and `set`. The macro
uses a syntax similar to defining a function. Parameters are concatenated to produce a key;
/// they must be `scale` encodable. The return type is used to auto encode and decode the database
/// they must be `borsh` serializable. The return type is used to auto (de)serialize the database
/// value bytes using `borsh`.
///
/// # Arguments
@@ -54,11 +54,10 @@ macro_rules! create_db {
)?;
impl$(<$($generic_name: $generic_type),+>)? $field_name$(<$($generic_name),+>)? {
pub(crate) fn key($($arg: $arg_type),*) -> Vec<u8> {
use scale::Encode;
$crate::serai_db_key(
stringify!($db_name).as_bytes(),
stringify!($field_name).as_bytes(),
($($arg),*).encode()
&borsh::to_vec(&($($arg),*)).unwrap(),
)
}
pub(crate) fn set(
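For orientation, a hypothetical invocation matching the syntax documented above (the database and field names here are illustrative, not from the tree):

create_db!(
  ExampleDb {
    // Key: b"ExampleDb" ++ b"LastBlock" ++ the borsh-serialized arguments
    // Value: a borsh-(de)serialized u64
    LastBlock: (network_id: u8) -> u64
  }
);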

View File

@@ -7,7 +7,7 @@ repository = "https://github.com/serai-dex/serai/tree/develop/common/env"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = []
edition = "2021"
rust-version = "1.71"
rust-version = "1.64"
[package.metadata.docs.rs]
all-features = true

common/env/LICENSE vendored
View File

@@ -1,6 +1,6 @@
AGPL-3.0-only license
Copyright (c) 2023 Luke Parker
Copyright (c) 2023-2025 Luke Parker
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License Version 3 as

View File

@@ -1,5 +1,5 @@
#![cfg_attr(docsrs, feature(doc_cfg))]
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
#![cfg_attr(docsrs, feature(doc_cfg))]
// Obtain a variable from the Serai environment/secret store.
pub fn var(variable: &str) -> Option<String> {

View File

@@ -7,7 +7,7 @@ repository = "https://github.com/serai-dex/serai/tree/develop/common/patchable-a
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = ["async", "sleep", "tokio", "smol", "async-std"]
edition = "2021"
rust-version = "1.71"
rust-version = "1.70"
[package.metadata.docs.rs]
all-features = true

View File

@@ -1,6 +1,6 @@
MIT License
Copyright (c) 2024 Luke Parker
Copyright (c) 2024-2025 Luke Parker
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal

View File

@@ -1,4 +1,4 @@
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
#![cfg_attr(docsrs, feature(doc_cfg))]
#![doc = include_str!("../README.md")]
#![deny(missing_docs)]

View File

@@ -1,9 +1,9 @@
[package]
name = "simple-request"
version = "0.1.0"
version = "0.3.0"
description = "A simple HTTP(S) request library"
license = "MIT"
repository = "https://github.com/serai-dex/serai/tree/develop/common/simple-request"
repository = "https://github.com/serai-dex/serai/tree/develop/common/request"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = ["http", "https", "async", "request", "ssl"]
edition = "2021"
@@ -19,9 +19,10 @@ workspace = true
[dependencies]
tower-service = { version = "0.3", default-features = false }
hyper = { version = "1", default-features = false, features = ["http1", "client"] }
hyper-util = { version = "0.1", default-features = false, features = ["http1", "client-legacy", "tokio"] }
hyper-util = { version = "0.1", default-features = false, features = ["http1", "client-legacy"] }
http-body-util = { version = "0.1", default-features = false }
tokio = { version = "1", default-features = false }
futures-util = { version = "0.3", default-features = false, features = ["std"] }
tokio = { version = "1", default-features = false, features = ["sync"] }
hyper-rustls = { version = "0.27", default-features = false, features = ["http1", "ring", "rustls-native-certs", "native-tokio"], optional = true }
@@ -29,6 +30,8 @@ zeroize = { version = "1", optional = true }
base64ct = { version = "1", features = ["alloc"], optional = true }
[features]
tls = ["hyper-rustls"]
tokio = ["hyper-util/tokio"]
tls = ["tokio", "hyper-rustls"]
webpki-roots = ["tls", "hyper-rustls/webpki-roots"]
basic-auth = ["zeroize", "base64ct"]
default = ["tls"]

View File

@@ -1,6 +1,6 @@
MIT License
Copyright (c) 2023 Luke Parker
Copyright (c) 2023-2025 Luke Parker
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal

View File

@@ -1,19 +1,20 @@
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
#![cfg_attr(docsrs, feature(doc_cfg))]
#![doc = include_str!("../README.md")]
use core::{pin::Pin, future::Future};
use std::sync::Arc;
use tokio::sync::Mutex;
use futures_util::FutureExt;
use ::tokio::sync::Mutex;
use tower_service::Service as TowerService;
use hyper::{Uri, header::HeaderValue, body::Bytes, client::conn::http1::SendRequest, rt::Executor};
pub use hyper;
use hyper_util::client::legacy::{Client as HyperClient, connect::HttpConnector};
#[cfg(feature = "tls")]
use hyper_rustls::{HttpsConnectorBuilder, HttpsConnector};
use hyper::{Uri, header::HeaderValue, body::Bytes, client::conn::http1::SendRequest};
use hyper_util::{
rt::tokio::TokioExecutor,
client::legacy::{Client as HyperClient, connect::HttpConnector},
};
pub use hyper;
mod request;
pub use request::*;
@@ -37,52 +38,86 @@ type Connector = HttpConnector;
type Connector = HttpsConnector<HttpConnector>;
#[derive(Clone, Debug)]
enum Connection {
enum Connection<
E: 'static + Send + Sync + Clone + Executor<Pin<Box<dyn Send + Future<Output = ()>>>>,
> {
ConnectionPool(HyperClient<Connector, Full<Bytes>>),
Connection {
executor: E,
connector: Connector,
host: Uri,
connection: Arc<Mutex<Option<SendRequest<Full<Bytes>>>>>,
},
}
/// An HTTP client.
///
/// `tls` is only guaranteed to work when using the `tokio` executor. Instantiating a client when
/// the `tls` feature is active without using the `tokio` executor will cause errors.
#[derive(Clone, Debug)]
pub struct Client {
connection: Connection,
pub struct Client<
E: 'static + Send + Sync + Clone + Executor<Pin<Box<dyn Send + Future<Output = ()>>>>,
> {
connection: Connection<E>,
}
impl Client {
fn connector() -> Connector {
impl<E: 'static + Send + Sync + Clone + Executor<Pin<Box<dyn Send + Future<Output = ()>>>>>
Client<E>
{
#[allow(clippy::unnecessary_wraps)]
fn connector() -> Result<Connector, Error> {
let mut res = HttpConnector::new();
res.set_keepalive(Some(core::time::Duration::from_secs(60)));
res.set_nodelay(true);
res.set_reuse_address(true);
#[cfg(feature = "tls")]
if core::any::TypeId::of::<E>() !=
core::any::TypeId::of::<hyper_util::rt::tokio::TokioExecutor>()
{
Err(Error::ConnectionError(
"`tls` feature enabled but not using the `tokio` executor".into(),
))?;
}
#[cfg(feature = "tls")]
res.enforce_http(false);
#[cfg(feature = "tls")]
let res = HttpsConnectorBuilder::new()
.with_native_roots()
.expect("couldn't fetch system's SSL roots")
.https_or_http()
.enable_http1()
.wrap_connector(res);
res
let https = HttpsConnectorBuilder::new().with_native_roots();
#[cfg(all(feature = "tls", not(feature = "webpki-roots")))]
let https = https.map_err(|e| {
Error::ConnectionError(
format!("couldn't load system's SSL root certificates and webpki-roots unavilable: {e:?}")
.into(),
)
})?;
// Fallback to `webpki-roots` if present
#[cfg(all(feature = "tls", feature = "webpki-roots"))]
let https = https.unwrap_or(HttpsConnectorBuilder::new().with_webpki_roots());
#[cfg(feature = "tls")]
let res = https.https_or_http().enable_http1().wrap_connector(res);
Ok(res)
}
pub fn with_connection_pool() -> Client {
Client {
pub fn with_executor_and_connection_pool(executor: E) -> Result<Client<E>, Error> {
Ok(Client {
connection: Connection::ConnectionPool(
HyperClient::builder(TokioExecutor::new())
HyperClient::builder(executor)
.pool_idle_timeout(core::time::Duration::from_secs(60))
.build(Self::connector()),
.build(Self::connector()?),
),
}
})
}
pub fn without_connection_pool(host: &str) -> Result<Client, Error> {
pub fn with_executor_and_without_connection_pool(
executor: E,
host: &str,
) -> Result<Client<E>, Error> {
Ok(Client {
connection: Connection::Connection {
connector: Self::connector(),
executor,
connector: Self::connector()?,
host: {
let uri: Uri = host.parse().map_err(|_| Error::InvalidUri)?;
if uri.host().is_none() {
@@ -95,9 +130,9 @@ impl Client {
})
}
pub async fn request<R: Into<Request>>(&self, request: R) -> Result<Response<'_>, Error> {
pub async fn request<R: Into<Request>>(&self, request: R) -> Result<Response<'_, E>, Error> {
let request: Request = request.into();
let mut request = request.0;
let Request { mut request, response_size_limit } = request;
if let Some(header_host) = request.headers().get(hyper::header::HOST) {
match &self.connection {
Connection::ConnectionPool(_) => {}
@@ -131,7 +166,7 @@ impl Client {
Connection::ConnectionPool(client) => {
client.request(request).await.map_err(Error::HyperUtil)?
}
Connection::Connection { connector, host, connection } => {
Connection::Connection { executor, connector, host, connection } => {
let mut connection_lock = connection.lock().await;
// If there's not a connection...
@@ -143,28 +178,46 @@ impl Client {
let call_res = call_res.map_err(Error::ConnectionError);
let (requester, connection) =
hyper::client::conn::http1::handshake(call_res?).await.map_err(Error::Hyper)?;
// This will die when we drop the requester, so we don't need to track an AbortHandle
// for it
tokio::spawn(connection);
// This task will die when we drop the requester
executor.execute(Box::pin(connection.map(|_| ())));
*connection_lock = Some(requester);
}
let connection = connection_lock.as_mut().unwrap();
let connection = connection_lock.as_mut().expect("lock over the connection was poisoned");
let mut err = connection.ready().await.err();
if err.is_none() {
// Send the request
let res = connection.send_request(request).await;
if let Ok(res) = res {
return Ok(Response(res, self));
let response = connection.send_request(request).await;
if let Ok(response) = response {
return Ok(Response { response, size_limit: response_size_limit, client: self });
}
err = res.err();
err = response.err();
}
// Since this connection has been put into an error state, drop it
*connection_lock = None;
Err(Error::Hyper(err.unwrap()))?
Err(Error::Hyper(err.expect("only here if `err` is some yet no error")))?
}
};
Ok(Response(response, self))
Ok(Response { response, size_limit: response_size_limit, client: self })
}
}
#[cfg(feature = "tokio")]
mod tokio {
use hyper_util::rt::tokio::TokioExecutor;
use super::*;
pub type TokioClient = Client<TokioExecutor>;
impl Client<TokioExecutor> {
pub fn with_connection_pool() -> Result<Self, Error> {
Self::with_executor_and_connection_pool(TokioExecutor::new())
}
pub fn without_connection_pool(host: &str) -> Result<Self, Error> {
Self::with_executor_and_without_connection_pool(TokioExecutor::new(), host)
}
}
}
#[cfg(feature = "tokio")]
pub use tokio::TokioClient;
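A usage sketch of the `tokio` convenience constructors above (our own example, assuming a `tokio` runtime and the default `tls` feature):

use simple_request::{hyper, Full, Request, TokioClient, Error};

async fn fetch() -> Result<(), Error> {
  // The constructors now return `Result`, as building the connector may fail
  let client = TokioClient::with_connection_pool()?;
  let mut request: Request = hyper::Request::get("https://example.com")
    .body(Full::new(hyper::body::Bytes::new()))
    .unwrap()
    .into();
  // Bound the response to 1 MiB (a best-effort limit, as documented on `Request`)
  request.set_response_size_limit(Some(1024 * 1024));
  let response = client.request(request).await?;
  let _body = response.body().await?; // An `impl std::io::Read` over the buffered body
  Ok(())
}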

View File

@@ -7,11 +7,15 @@ pub use http_body_util::Full;
use crate::Error;
#[derive(Debug)]
pub struct Request(pub(crate) hyper::Request<Full<Bytes>>);
pub struct Request {
pub(crate) request: hyper::Request<Full<Bytes>>,
pub(crate) response_size_limit: Option<usize>,
}
impl Request {
#[cfg(feature = "basic-auth")]
fn username_password_from_uri(&self) -> Result<(String, String), Error> {
if let Some(authority) = self.0.uri().authority() {
if let Some(authority) = self.request.uri().authority() {
let authority = authority.as_str();
if authority.contains('@') {
// Decode the username and password from the URI
@@ -36,9 +40,10 @@ impl Request {
let mut formatted = format!("{username}:{password}");
let mut encoded = Base64::encode_string(formatted.as_bytes());
formatted.zeroize();
self.0.headers_mut().insert(
self.request.headers_mut().insert(
hyper::header::AUTHORIZATION,
HeaderValue::from_str(&format!("Basic {encoded}")).unwrap(),
HeaderValue::from_str(&format!("Basic {encoded}"))
.expect("couldn't form header from base64-encoded string"),
);
encoded.zeroize();
}
@@ -59,9 +64,17 @@ impl Request {
pub fn with_basic_auth(&mut self) {
let _ = self.basic_auth_from_uri();
}
}
impl From<hyper::Request<Full<Bytes>>> for Request {
fn from(request: hyper::Request<Full<Bytes>>) -> Request {
Request(request)
/// Set a size limit for the response.
///
/// This may be exceeded by a single HTTP frame and accordingly isn't perfect.
pub fn set_response_size_limit(&mut self, response_size_limit: Option<usize>) {
self.response_size_limit = response_size_limit;
}
}
impl From<hyper::Request<Full<Bytes>>> for Request {
fn from(request: hyper::Request<Full<Bytes>>) -> Request {
Request { request, response_size_limit: None }
}
}

View File

@@ -1,24 +1,54 @@
use core::{pin::Pin, future::Future};
use std::io;
use hyper::{
StatusCode,
header::{HeaderValue, HeaderMap},
body::{Buf, Incoming},
body::Incoming,
rt::Executor,
};
use http_body_util::BodyExt;
use futures_util::{Stream, StreamExt};
use crate::{Client, Error};
// Borrows the client so its async task lives as long as this response exists.
#[allow(dead_code)]
#[derive(Debug)]
pub struct Response<'a>(pub(crate) hyper::Response<Incoming>, pub(crate) &'a Client);
impl<'a> Response<'a> {
pub struct Response<
'a,
E: 'static + Send + Sync + Clone + Executor<Pin<Box<dyn Send + Future<Output = ()>>>>,
> {
pub(crate) response: hyper::Response<Incoming>,
pub(crate) size_limit: Option<usize>,
pub(crate) client: &'a Client<E>,
}
impl<E: 'static + Send + Sync + Clone + Executor<Pin<Box<dyn Send + Future<Output = ()>>>>>
Response<'_, E>
{
pub fn status(&self) -> StatusCode {
self.0.status()
self.response.status()
}
pub fn headers(&self) -> &HeaderMap<HeaderValue> {
self.0.headers()
self.response.headers()
}
pub async fn body(self) -> Result<impl std::io::Read, Error> {
Ok(self.0.into_body().collect().await.map_err(Error::Hyper)?.aggregate().reader())
let mut body = self.response.into_body().into_data_stream();
let mut res: Vec<u8> = vec![];
loop {
if let Some(size_limit) = self.size_limit {
let (lower, upper) = body.size_hint();
if res.len().wrapping_add(upper.unwrap_or(lower)) > size_limit.min(usize::MAX - 1) {
Err(Error::ConnectionError("response exceeded size limit".into()))?;
}
}
let Some(part) = body.next().await else { break };
let part = part.map_err(Error::Hyper)?;
res.extend(part.as_ref());
}
Ok(io::Cursor::new(res))
}
}

View File

@@ -1,13 +1,13 @@
[package]
name = "std-shims"
version = "0.1.1"
version = "0.1.5"
description = "A series of std shims to make alloc more feasible"
license = "MIT"
repository = "https://github.com/serai-dex/serai/tree/develop/common/std-shims"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = ["nostd", "no_std", "alloc", "io"]
edition = "2021"
rust-version = "1.80"
rust-version = "1.65"
[package.metadata.docs.rs]
all-features = true
@@ -17,9 +17,11 @@ rustdoc-args = ["--cfg", "docsrs"]
workspace = true
[dependencies]
spin = { version = "0.9", default-features = false, features = ["use_ticket_mutex", "lazy"] }
hashbrown = { version = "0.15", default-features = false, features = ["default-hasher", "inline-more"] }
rustversion = { version = "1", default-features = false }
spin = { version = "0.10", default-features = false, features = ["use_ticket_mutex", "fair_mutex", "once", "lazy"] }
hashbrown = { version = "0.16", default-features = false, features = ["default-hasher", "inline-more"], optional = true }
[features]
std = []
alloc = ["hashbrown"]
std = ["alloc", "spin/std"]
default = ["std"]

View File

@@ -1,6 +1,6 @@
MIT License
Copyright (c) 2023 Luke Parker
Copyright (c) 2023-2025 Luke Parker
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal

View File

@@ -1,6 +1,28 @@
# std shims
# `std` shims
A crate which passes through to std when the default `std` feature is enabled,
yet provides a series of shims when it isn't.
`std-shims` is a Rust crate with two purposes:
- Expand the functionality of `core` and `alloc`
- Polyfill functionality only available on newer versions of Rust
`HashSet` and `HashMap` are provided via `hashbrown`.
The goal is to make supporting no-`std` environments, and older versions of
Rust, as simple as possible. For most use cases, replacing `std::` with
`std_shims::` and adding `use std_shims::prelude::*` is sufficient to take full
advantage of `std-shims`.
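As a minimal sketch of that drop-in usage (our own example, assuming the `alloc` feature):

#![no_std]

use std_shims::prelude::*;
use std_shims::collections::HashMap;

// `format!`, `Vec`, `String`, etc. come via the prelude; `HashMap` via `hashbrown`
fn summarize(items: &[u8]) -> String {
  let mut counts = HashMap::new();
  for item in items {
    *counts.entry(*item).or_insert(0usize) += 1;
  }
  format!("{} distinct values", counts.len())
}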
# API Surface
`std-shims` only aims to publicly expose items _mutually available_ between
`alloc` (with extra dependencies) and `std`. Items exclusive to `std`, with
no shims available, will not be exported by `std-shims`.
# Dependencies
`HashSet` and `HashMap` are provided via `hashbrown`. Synchronization
primitives are provided via `spin` (avoiding a requirement on
`critical-section`). Sections of `std::io` are independently reimplemented,
matching `std` as closely as possible. `rustversion` is used to detect when to
provide polyfills.
# Disclaimer
No guarantee of one-to-one parity is provided. The shims provided aim to be
sufficient for the average case. Pull requests are _welcome_.

View File

@@ -1,7 +1,7 @@
#[cfg(all(feature = "alloc", not(feature = "std")))]
pub use extern_alloc::collections::*;
#[cfg(all(feature = "alloc", not(feature = "std")))]
pub use hashbrown::{HashSet, HashMap};
#[cfg(feature = "std")]
pub use std::collections::*;
#[cfg(not(feature = "std"))]
pub use alloc::collections::*;
#[cfg(not(feature = "std"))]
pub use hashbrown::{HashSet, HashMap};

View File

@@ -1,42 +1,74 @@
#[cfg(feature = "std")]
pub use std::io::*;
#[cfg(not(feature = "std"))]
mod shims {
use core::fmt::{Debug, Formatter};
use alloc::{boxed::Box, vec::Vec};
use core::fmt::{self, Debug, Display, Formatter};
#[cfg(feature = "alloc")]
use extern_alloc::{boxed::Box, vec::Vec};
use crate::error::Error as CoreError;
/// The kind of error.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
pub enum ErrorKind {
UnexpectedEof,
Other,
}
/// An error.
#[derive(Debug)]
pub struct Error {
kind: ErrorKind,
error: Box<dyn Send + Sync>,
#[cfg(feature = "alloc")]
error: Box<dyn Send + Sync + CoreError>,
}
impl Debug for Error {
fn fmt(&self, fmt: &mut Formatter<'_>) -> core::result::Result<(), core::fmt::Error> {
fmt.debug_struct("Error").field("kind", &self.kind).finish_non_exhaustive()
impl Display for Error {
fn fmt(&self, f: &mut Formatter<'_>) -> fmt::Result {
<Self as Debug>::fmt(self, f)
}
}
impl CoreError for Error {}
#[cfg(not(feature = "alloc"))]
pub trait IntoBoxSendSyncError {}
#[cfg(not(feature = "alloc"))]
impl<I> IntoBoxSendSyncError for I {}
#[cfg(feature = "alloc")]
pub trait IntoBoxSendSyncError: Into<Box<dyn Send + Sync + CoreError>> {}
#[cfg(feature = "alloc")]
impl<I: Into<Box<dyn Send + Sync + CoreError>>> IntoBoxSendSyncError for I {}
impl Error {
pub fn new<E: 'static + Send + Sync>(kind: ErrorKind, error: E) -> Error {
Error { kind, error: Box::new(error) }
/// Create a new error.
///
/// The error object itself is silently dropped when `alloc` is not enabled.
#[allow(unused)]
pub fn new<E: 'static + IntoBoxSendSyncError>(kind: ErrorKind, error: E) -> Error {
#[cfg(not(feature = "alloc"))]
let res = Error { kind };
#[cfg(feature = "alloc")]
let res = Error { kind, error: error.into() };
res
}
pub fn other<E: 'static + Send + Sync>(error: E) -> Error {
Error { kind: ErrorKind::Other, error: Box::new(error) }
/// Create a new error with `io::ErrorKind::Other` as its kind.
///
/// The error object itself is silently dropped when `alloc` is not enabled.
#[allow(unused)]
pub fn other<E: 'static + IntoBoxSendSyncError>(error: E) -> Error {
#[cfg(not(feature = "alloc"))]
let res = Error { kind: ErrorKind::Other };
#[cfg(feature = "alloc")]
let res = Error { kind: ErrorKind::Other, error: error.into() };
res
}
/// The kind of error.
pub fn kind(&self) -> ErrorKind {
self.kind
}
pub fn into_inner(self) -> Option<Box<dyn Send + Sync>> {
/// Retrieve the inner error.
#[cfg(feature = "alloc")]
pub fn into_inner(self) -> Option<Box<dyn Send + Sync + CoreError>> {
Some(self.error)
}
}
@@ -64,6 +96,12 @@ mod shims {
}
}
impl<R: Read> Read for &mut R {
fn read(&mut self, buf: &mut [u8]) -> Result<usize> {
R::read(*self, buf)
}
}
pub trait BufRead: Read {
fn fill_buf(&mut self) -> Result<&[u8]>;
fn consume(&mut self, amt: usize);
@@ -88,6 +126,7 @@ mod shims {
}
}
#[cfg(feature = "alloc")]
impl Write for Vec<u8> {
fn write(&mut self, buf: &[u8]) -> Result<usize> {
self.extend(buf);
@@ -95,6 +134,8 @@ mod shims {
}
}
}
#[cfg(not(feature = "std"))]
pub use shims::*;
#[cfg(feature = "std")]
pub use std::io::{ErrorKind, Error, Result, Read, BufRead, Write};
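A small sketch of writing against the mutual `io` API (our own example, using only the methods shown above plus the module's `Result` alias):

use std_shims::{vec::Vec, io::{self, Write}};

// Compiles under `std` and, with the `alloc` feature, under no-`std`
fn write_u32(writer: &mut impl Write, value: u32) -> io::Result<usize> {
  writer.write(&value.to_le_bytes())
}

fn encode() -> io::Result<Vec<u8>> {
  // `Write` is implemented for `Vec<u8>` in both configurations
  let mut buf = Vec::new();
  write_u32(&mut buf, 0xdead_beef)?;
  Ok(buf)
}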

View File

@@ -1,13 +1,102 @@
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
#![cfg_attr(docsrs, feature(doc_cfg))]
#![doc = include_str!("../README.md")]
#![cfg_attr(not(feature = "std"), no_std)]
pub extern crate alloc;
#[cfg(not(feature = "alloc"))]
pub use core::*;
#[cfg(not(feature = "alloc"))]
pub use core::{alloc, borrow, ffi, fmt, slice, str, task};
#[cfg(not(feature = "std"))]
#[rustversion::before(1.81)]
pub mod error {
use core::fmt::{Debug, Display};
pub trait Error: Debug + Display {}
}
#[cfg(not(feature = "std"))]
#[rustversion::since(1.81)]
pub use core::error;
#[cfg(feature = "alloc")]
extern crate alloc as extern_alloc;
#[cfg(all(feature = "alloc", not(feature = "std")))]
pub use extern_alloc::{alloc, borrow, boxed, ffi, fmt, rc, slice, str, string, task, vec, format};
#[cfg(feature = "std")]
pub use std::{alloc, borrow, boxed, error, ffi, fmt, rc, slice, str, string, task, vec, format};
pub mod sync;
pub mod collections;
pub mod io;
pub mod sync;
pub use alloc::vec;
pub use alloc::str;
pub use alloc::string;
pub mod prelude {
// Shim the `std` prelude
#[cfg(feature = "alloc")]
pub use extern_alloc::{
format, vec,
borrow::ToOwned,
boxed::Box,
vec::Vec,
string::{String, ToString},
};
// Shim `div_ceil`
#[rustversion::before(1.73)]
#[doc(hidden)]
pub trait StdShimsDivCeil {
fn div_ceil(self, rhs: Self) -> Self;
}
#[rustversion::before(1.73)]
mod impl_divceil {
use super::StdShimsDivCeil;
impl StdShimsDivCeil for u8 {
fn div_ceil(self, rhs: Self) -> Self {
(self + (rhs - 1)) / rhs
}
}
impl StdShimsDivCeil for u16 {
fn div_ceil(self, rhs: Self) -> Self {
(self + (rhs - 1)) / rhs
}
}
impl StdShimsDivCeil for u32 {
fn div_ceil(self, rhs: Self) -> Self {
(self + (rhs - 1)) / rhs
}
}
impl StdShimsDivCeil for u64 {
fn div_ceil(self, rhs: Self) -> Self {
(self + (rhs - 1)) / rhs
}
}
impl StdShimsDivCeil for u128 {
fn div_ceil(self, rhs: Self) -> Self {
(self + (rhs - 1)) / rhs
}
}
impl StdShimsDivCeil for usize {
fn div_ceil(self, rhs: Self) -> Self {
(self + (rhs - 1)) / rhs
}
}
}
// Shim `io::Error::other`
#[cfg(feature = "std")]
#[rustversion::before(1.74)]
#[doc(hidden)]
pub trait StdShimsIoErrorOther {
fn other<E>(error: E) -> Self
where
E: Into<Box<dyn std::error::Error + Send + Sync>>;
}
#[cfg(feature = "std")]
#[rustversion::before(1.74)]
impl StdShimsIoErrorOther for std::io::Error {
fn other<E>(error: E) -> Self
where
E: Into<Box<dyn std::error::Error + Send + Sync>>,
{
std::io::Error::new(std::io::ErrorKind::Other, error)
}
}
}
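A short sketch of the `div_ceil` polyfill in use (our own example): on Rust 1.73 and later the inherent method resolves and the shim trait is never defined, while older compilers fall back to the trait above.

use std_shims::prelude::*;

fn blocks_needed(bytes: u64, block_size: u64) -> u64 {
  // Rounds up, e.g. `blocks_needed(7, 2) == 4`
  bytes.div_ceil(block_size)
}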

View File

@@ -1,19 +1,80 @@
pub use core::sync::*;
pub use alloc::sync::*;
pub use core::sync::atomic;
#[cfg(all(feature = "alloc", not(feature = "std")))]
pub use extern_alloc::sync::{Arc, Weak};
#[cfg(feature = "std")]
pub use std::sync::{Arc, Weak};
mod mutex_shim {
#[cfg(feature = "std")]
pub use std::sync::*;
#[cfg(not(feature = "std"))]
pub use spin::*;
mod spin_mutex {
use core::ops::{Deref, DerefMut};
#[derive(Default, Debug)]
// We wrap this in an `Option` so we can consider `None` as poisoned
pub(super) struct Mutex<T>(spin::Mutex<Option<T>>);
/// An acquired view of a `Mutex`.
pub struct MutexGuard<'mutex, T> {
mutex: spin::MutexGuard<'mutex, Option<T>>,
// This is `Some` for the lifetime of this guard, and is only represented as an `Option` due
// to needing to move it on `Drop` (which solely gives us a mutable reference to `self`)
value: Option<T>,
}
impl<T> Mutex<T> {
pub(super) const fn new(value: T) -> Self {
Self(spin::Mutex::new(Some(value)))
}
pub(super) fn lock(&self) -> MutexGuard<'_, T> {
let mut mutex = self.0.lock();
// Take from the `Mutex` so future acquisitions will see `None` unless this is restored
let value = mutex.take();
// Check the prior acquisition did in fact restore the value
if value.is_none() {
panic!("locking a `spin::Mutex` held by a thread which panicked");
}
MutexGuard { mutex, value }
}
}
impl<T> Deref for MutexGuard<'_, T> {
type Target = T;
fn deref(&self) -> &T {
self.value.as_ref().expect("no value yet checked upon lock acquisition")
}
}
impl<T> DerefMut for MutexGuard<'_, T> {
fn deref_mut(&mut self) -> &mut T {
self.value.as_mut().expect("no value yet checked upon lock acquisition")
}
}
impl<'mutex, T> Drop for MutexGuard<'mutex, T> {
fn drop(&mut self) {
// Restore the value
*self.mutex = self.value.take();
}
}
}
#[cfg(not(feature = "std"))]
pub use spin_mutex::*;
#[cfg(feature = "std")]
pub use std::sync::{Mutex, MutexGuard};
/// A shimmed `Mutex` with an API mutual to `spin` and `std`.
pub struct ShimMutex<T>(Mutex<T>);
impl<T> ShimMutex<T> {
/// Construct a new `Mutex`.
pub const fn new(value: T) -> Self {
Self(Mutex::new(value))
}
/// Acquire a lock on the contents of the `Mutex`.
///
/// This will panic if the `Mutex` was poisoned.
///
/// On no-`std` environments, the implementation defers to a spin lock.
pub fn lock(&self) -> MutexGuard<'_, T> {
#[cfg(feature = "std")]
let res = self.0.lock().unwrap();
@@ -25,7 +86,12 @@ mod mutex_shim {
}
pub use mutex_shim::{ShimMutex as Mutex, MutexGuard};
#[cfg(feature = "std")]
pub use std::sync::LazyLock;
#[rustversion::before(1.80)]
pub use spin::Lazy as LazyLock;
#[rustversion::since(1.80)]
#[cfg(not(feature = "std"))]
pub use spin::Lazy as LazyLock;
#[rustversion::since(1.80)]
#[cfg(feature = "std")]
pub use std::sync::LazyLock;
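A usage sketch, assuming the crate is imported as `std_shims`: `Mutex` here is the `ShimMutex` re-export, whose `lock` panics on poisoning instead of returning a `Result`, the API mutual to the `std` and `spin` backends.
use std_shims::sync::{LazyLock, Mutex};
static COUNTER: LazyLock<Mutex<u64>> = LazyLock::new(|| Mutex::new(0));
fn increment() -> u64 {
  // Panics if a prior holder panicked, under either backend
  let mut counter = COUNTER.lock();
  *counter += 1;
  *counter
}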

View File

@@ -1,6 +1,6 @@
AGPL-3.0-only license
Copyright (c) 2022-2025 Luke Parker
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License Version 3 as

View File

@@ -1,11 +1,17 @@
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
#![cfg_attr(docsrs, feature(doc_cfg))]
#![doc = include_str!("../README.md")]
#![deny(missing_docs)]
use core::{
fmt::{self, Debug},
future::Future,
time::Duration,
};
use tokio::sync::mpsc;
mod type_name;
/// A handle for a task.
///
/// The task will only stop running once all handles for it are dropped.
@@ -45,8 +51,6 @@ impl Task {
impl TaskHandle {
/// Tell the task to run now (and not whenever its next iteration on a timer is).
pub fn run_now(&self) {
#[allow(clippy::match_same_arms)]
match self.run_now.try_send(()) {
@@ -54,12 +58,22 @@ impl TaskHandle {
// NOP on full, as this task will already be run as soon as possible
Err(mpsc::error::TrySendError::Full(())) => {}
Err(mpsc::error::TrySendError::Closed(())) => {
// The task should only be closed if all handles are dropped, and this one hasn't been
panic!("task was unexpectedly closed when calling run_now")
}
}
}
}
/// An enum which can't be constructed, representing that the task does not error.
pub enum DoesNotError {}
impl Debug for DoesNotError {
fn fmt(&self, _: &mut fmt::Formatter<'_>) -> Result<(), fmt::Error> {
// This type can't be constructed so we'll never have a `&self` to call this fn with
unreachable!()
}
}
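// A sketch, not from this file, of an infallible task: `DoesNotError`
// satisfies the `Error: Debug` bound declared on the trait below.
//
//   struct Noop;
//   impl ContinuallyRan for Noop {
//     type Error = DoesNotError;
//     fn run_iteration(&mut self) -> impl Send + Future<Output = Result<bool, Self::Error>> {
//       async move { Ok(false) }
//     }
//   }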
/// A task to be continually run.
pub trait ContinuallyRan: Sized + Send {
/// The amount of seconds before this task should be polled again.
@@ -69,11 +83,14 @@ pub trait ContinuallyRan: Sized + Send {
/// Upon error, the amount of time waited will be linearly increased until this limit.
const MAX_DELAY_BETWEEN_ITERATIONS: u64 = 120;
/// The error potentially yielded upon running an iteration of this task.
type Error: Debug;
/// Run an iteration of the task.
///
/// If this returns `true`, all dependents of the task will immediately have a new iteration ran
/// (without waiting for whatever timer they were already on).
fn run_iteration(&mut self) -> impl Send + Future<Output = Result<bool, Self::Error>>;
/// Continually run the task.
fn continually_run(
@@ -115,12 +132,20 @@ pub trait ContinuallyRan: Sized + Send {
}
}
Err(e) => {
log::warn!("{}", e);
// Get the type name
let type_name = type_name::strip_type_name(core::any::type_name::<Self>());
// Print the error as a warning, prefixed by the task's type
log::warn!("{type_name}: {e:?}");
increase_sleep_before_next_task(&mut current_sleep_before_next_task);
}
}
// Don't run the task again for another few seconds UNLESS told to run now
/*
We could replace tokio::mpsc with async_channel, tokio::time::sleep with
patchable_async_sleep::sleep, and tokio::select with futures_lite::future::or
It isn't worth the effort when patchable_async_sleep::sleep will still resolve to tokio
*/
tokio::select! {
() = tokio::time::sleep(Duration::from_secs(current_sleep_before_next_task)) => {},
msg = task.run_now.recv() => {

View File

@@ -0,0 +1,31 @@
/// Strip the modules from a type name.
// This may be of the form `a::b::C`, in which case we only want `C`
pub(crate) fn strip_type_name(full_type_name: &'static str) -> String {
// It also may be `a::b::C<d::e::F>`, in which case, we only attempt to strip `a::b`
let mut by_generics = full_type_name.split('<');
// Strip to just `C`
let full_outer_object_name = by_generics.next().unwrap();
let mut outer_object_name_parts = full_outer_object_name.split("::");
let mut last_part_in_outer_object_name = outer_object_name_parts.next().unwrap();
for part in outer_object_name_parts {
last_part_in_outer_object_name = part;
}
// Push back on the generic terms
let mut type_name = last_part_in_outer_object_name.to_string();
for generic in by_generics {
type_name.push('<');
type_name.push_str(generic);
}
type_name
}
#[test]
fn test_strip_type_name() {
assert_eq!(strip_type_name("core::option::Option"), "Option");
assert_eq!(
strip_type_name("core::option::Option<alloc::string::String>"),
"Option<alloc::string::String>"
);
}

View File

@@ -7,7 +7,9 @@ repository = "https://github.com/serai-dex/serai/tree/develop/common/zalloc"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = []
edition = "2021"
rust-version = "1.77"
# This must be specified with the patch version, else Rust believes `1.77` < `1.77.0` and will
# refuse to compile due to relying on versions introduced with `1.77.0`
rust-version = "1.77.0"
[package.metadata.docs.rs]
all-features = true

View File

@@ -1,6 +1,6 @@
MIT License
Copyright (c) 2022-2023 Luke Parker
Copyright (c) 2022-2025 Luke Parker
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal

View File

@@ -1,5 +1,5 @@
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
#![cfg_attr(docsrs, feature(doc_cfg))]
#![cfg_attr(all(zalloc_rustc_nightly, feature = "allocator"), feature(allocator_api))]
//! Implementation of a Zeroizing Allocator, enabling zeroizing memory on deallocation.

View File

@@ -8,7 +8,6 @@ authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = []
edition = "2021"
publish = false
rust-version = "1.81"
[package.metadata.docs.rs]
all-features = true
@@ -22,16 +21,17 @@ zeroize = { version = "^1.5", default-features = false, features = ["std"] }
bitvec = { version = "1", default-features = false, features = ["std"] }
rand_core = { version = "0.6", default-features = false, features = ["std"] }
blake2 = { version = "0.10", default-features = false, features = ["std"] }
blake2 = { version = "0.11.0-rc.0", default-features = false, features = ["alloc"] }
schnorrkel = { version = "0.11", default-features = false, features = ["std"] }
transcript = { package = "flexible-transcript", path = "../crypto/transcript", default-features = false, features = ["std", "recommended"] }
dalek-ff-group = { path = "../crypto/dalek-ff-group", default-features = false, features = ["std"] }
ciphersuite = { path = "../crypto/ciphersuite", default-features = false, features = ["std"] }
schnorr = { package = "schnorr-signatures", path = "../crypto/schnorr", default-features = false, features = ["std"] }
dkg = { package = "dkg-musig", path = "../crypto/dkg/musig", default-features = false, features = ["std"] }
frost = { package = "modular-frost", path = "../crypto/frost" }
frost-schnorrkel = { path = "../crypto/schnorrkel" }
scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["std", "derive"] }
hex = { version = "0.4", default-features = false, features = ["std"] }
borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] }
zalloc = { path = "../common/zalloc" }
serai-db = { path = "../common/db" }
@@ -40,28 +40,22 @@ serai-task = { path = "../common/task", version = "0.1" }
messages = { package = "serai-processor-messages", path = "../processor/messages" }
message-queue = { package = "serai-message-queue", path = "../message-queue" }
tributary = { package = "tributary-chain", path = "./tributary" }
tributary-sdk = { path = "./tributary-sdk" }
serai-client-serai = { path = "../substrate/client/serai", default-features = false }
log = { version = "0.4", default-features = false, features = ["std"] }
env_logger = { version = "0.10", default-features = false, features = ["humantime"] }
tokio = { version = "1", default-features = false, features = ["time", "sync", "macros", "rt-multi-thread"] }
serai-cosign = { path = "./cosign" }
serai-coordinator-substrate = { path = "./substrate" }
serai-coordinator-tributary = { path = "./tributary" }
serai-coordinator-p2p = { path = "./p2p" }
serai-coordinator-libp2p-p2p = { path = "./p2p/libp2p" }
[dev-dependencies]
tributary = { package = "tributary-chain", path = "./tributary", features = ["tests"] }
sp-application-crypto = { git = "https://github.com/serai-dex/substrate", default-features = false, features = ["std"] }
sp-runtime = { git = "https://github.com/serai-dex/substrate", default-features = false, features = ["std"] }
[features]
longer-reattempts = ["serai-coordinator-tributary/longer-reattempts"]
parity-db = ["serai-db/parity-db"]
rocksdb = ["serai-db/rocksdb"]

View File

@@ -1,19 +1,29 @@
# Coordinator
- [`tendermint`](/tributary/tendermint) is an implementation of the Tendermint
  BFT algorithm.
- [`tributary-sdk`](./tributary-sdk) is a micro-blockchain framework. Instead
  of producing a blockchain daemon like the Polkadot SDK or Cosmos SDK intend
  to, `tributary-sdk` is solely intended to be an embedded asynchronous task
  within an application.
  The Serai coordinator spawns a tributary for each validator set it's
  coordinating. This allows the participating validators to communicate in a
  byzantine-fault-tolerant manner (relying on Tendermint for consensus).
- [`cosign`](./cosign) contains a library to decide which Substrate blocks
  should be cosigned and to evaluate cosigns.
- [`substrate`](./substrate) contains a library to index the Substrate
  blockchain and handle its events.
- [`tributary`](./tributary) is our instantiation of the Tributary SDK for the
  Serai processor. It includes the `Transaction` definition and deferred
  execution logic.
- [`p2p`](./p2p) is our abstract P2P API to service the Coordinator.
- [`libp2p`](./p2p/libp2p) is our libp2p-backed implementation of the P2P API.
- [`src`](./src) contains the source code for the Coordinator binary itself.

View File

@@ -8,7 +8,7 @@ authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = []
edition = "2021"
publish = false
rust-version = "1.81"
rust-version = "1.85"
[package.metadata.docs.rs]
all-features = true
@@ -18,12 +18,10 @@ rustdoc-args = ["--cfg", "docsrs"]
workspace = true
[dependencies]
blake2 = { version = "0.10", default-features = false, features = ["std"] }
schnorrkel = { version = "0.11", default-features = false, features = ["std"] }
blake2 = { version = "0.11.0-rc.0", default-features = false, features = ["alloc"] }
scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["std", "derive"] }
borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] }
serai-client-serai = { path = "../../substrate/client/serai", default-features = false }
log = { version = "0.4", default-features = false, features = ["std"] }
@@ -31,3 +29,5 @@ tokio = { version = "1", default-features = false }
serai-db = { path = "../../common/db", version = "0.1.1" }
serai-task = { path = "../../common/task", version = "0.1" }
serai-cosign-types = { path = "./types" }

View File

@@ -1,6 +1,6 @@
AGPL-3.0-only license
Copyright (c) 2023-2025 Luke Parker
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License Version 3 as

View File

@@ -2,7 +2,7 @@ use core::future::Future;
use std::time::{Duration, SystemTime};
use serai_db::*;
use serai_task::{DoesNotError, ContinuallyRan};
use crate::evaluator::CosignedBlocks;
@@ -25,7 +25,9 @@ pub(crate) struct CosignDelayTask<D: Db> {
}
impl<D: Db> ContinuallyRan for CosignDelayTask<D> {
type Error = DoesNotError;
fn run_iteration(&mut self) -> impl Send + Future<Output = Result<bool, Self::Error>> {
async move {
let mut made_progress = false;
loop {

View File

@@ -1,5 +1,5 @@
use core::future::Future;
use std::time::{Duration, Instant, SystemTime};
use serai_db::*;
use serai_task::ContinuallyRan;
@@ -77,10 +77,22 @@ pub(crate) fn currently_evaluated_global_session(getter: &impl Get) -> Option<[u
pub(crate) struct CosignEvaluatorTask<D: Db, R: RequestNotableCosigns> {
pub(crate) db: D,
pub(crate) request: R,
pub(crate) last_request_for_cosigns: Instant,
}
impl<D: Db, R: RequestNotableCosigns> ContinuallyRan for CosignEvaluatorTask<D, R> {
type Error = String;
fn run_iteration(&mut self) -> impl Send + Future<Output = Result<bool, Self::Error>> {
let should_request_cosigns = |last_request_for_cosigns: &mut Instant| {
const REQUEST_COSIGNS_SPACING: Duration = Duration::from_secs(60);
if Instant::now() < (*last_request_for_cosigns + REQUEST_COSIGNS_SPACING) {
return false;
}
*last_request_for_cosigns = Instant::now();
true
};
async move {
let mut known_cosign = None;
let mut made_progress = false;
@@ -116,12 +128,13 @@ impl<D: Db, R: RequestNotableCosigns> ContinuallyRan for CosignEvaluatorTask<D,
// Check if the sum weight doesn't cross the required threshold
if weight_cosigned < (((global_session_info.total_stake * 83) / 100) + 1) {
// Request the necessary cosigns over the network
if should_request_cosigns(&mut self.last_request_for_cosigns) {
self
.request
.request_notable_cosigns(global_session)
.await
.map_err(|e| format!("{e:?}"))?;
}
// We return an error so the delay before this task is run again increases
return Err(format!(
"notable block (#{block_number}) wasn't yet cosigned. this should resolve shortly",
@@ -178,11 +191,13 @@ impl<D: Db, R: RequestNotableCosigns> ContinuallyRan for CosignEvaluatorTask<D,
// If this session hasn't yet produced notable cosigns, then we presume we'll see
// the desired non-notable cosigns as part of normal operations, without needing to
// explicitly request them
if should_request_cosigns(&mut self.last_request_for_cosigns) {
self
.request
.request_notable_cosigns(global_session)
.await
.map_err(|e| format!("{e:?}"))?;
}
// We return an error so the delay before this task is run again increases
return Err(format!(
"block (#{block_number}) wasn't yet cosigned. this should resolve shortly",

View File

@@ -1,10 +1,21 @@
use core::future::Future;
use std::{sync::Arc, collections::HashMap};
use blake2::{Digest, Blake2b256};
use serai_client_serai::{
abi::{
primitives::{
network_id::{ExternalNetworkId, NetworkId},
balance::Amount,
crypto::Public,
validator_sets::{Session, ExternalValidatorSet},
address::SeraiAddress,
merkle::IncrementalUnbalancedMerkleTree,
},
validator_sets::Event,
},
Serai, Events,
};
use serai_db::*;
@@ -12,9 +23,20 @@ use serai_task::ContinuallyRan;
use crate::*;
#[derive(BorshSerialize, BorshDeserialize)]
struct Set {
session: Session,
key: Public,
stake: Amount,
}
create_db!(
CosignIntend {
ScanCosignFrom: () -> u64,
BuildsUpon: () -> IncrementalUnbalancedMerkleTree,
Stakes: (network: ExternalNetworkId, validator: SeraiAddress) -> Amount,
Validators: (set: ExternalValidatorSet) -> Vec<SeraiAddress>,
LatestSet: (network: ExternalNetworkId) -> Set,
}
);
@@ -28,92 +50,162 @@ db_channel! {
CosignIntendChannels {
GlobalSessionsChannel: () -> ([u8; 32], GlobalSession),
BlockEvents: () -> BlockEventData,
IntendedCosigns: (set: ExternalValidatorSet) -> CosignIntent,
}
}
async fn block_has_events_justifying_a_cosign(
serai: &Serai,
block_number: u64,
) -> Result<(Block, Events, HasEvents), String> {
let block = serai
.block_by_number(block_number)
.await
.map_err(|e| format!("{e:?}"))?
.ok_or_else(|| "couldn't get block which should've been finalized".to_string())?;
let events = serai.events(block.header.hash()).await.map_err(|e| format!("{e:?}"))?;
if events.validator_sets().set_keys_events().next().is_some() {
return Ok((block, events, HasEvents::Notable));
}
if events.coins().burn_with_instruction_events().next().is_some() {
return Ok((block, events, HasEvents::NonNotable));
}
Ok((block, events, HasEvents::No))
}
// Fetch the `ExternalValidatorSet`s, and their associated keys, used for cosigning as of this
// block.
fn cosigning_sets(getter: &impl Get) -> Vec<(ExternalValidatorSet, Public, Amount)> {
let mut sets = vec![];
for network in ExternalNetworkId::all() {
let Some(Set { session, key, stake }) = LatestSet::get(getter, network) else {
// If this network doesn't have usable keys, move on
continue;
};
sets.push((ExternalValidatorSet { network, session }, key, stake));
}
sets
}
/// A task to determine which blocks we should intend to cosign.
pub(crate) struct CosignIntendTask<D: Db> {
pub(crate) db: D,
pub(crate) serai: Arc<Serai>,
}
impl<D: Db> ContinuallyRan for CosignIntendTask<D> {
type Error = String;
fn run_iteration(&mut self) -> impl Send + Future<Output = Result<bool, Self::Error>> {
async move {
let start_block_number = ScanCosignFrom::get(&self.db).unwrap_or(1);
let latest_block_number =
self.serai.latest_finalized_block_number().await.map_err(|e| format!("{e:?}"))?;
for block_number in start_block_number ..= latest_block_number {
let mut txn = self.db.txn();
let (block, events, mut has_events) =
block_has_events_justifying_a_cosign(&self.serai, block_number)
.await
.map_err(|e| format!("{e:?}"))?;
let mut builds_upon =
BuildsUpon::get(&txn).unwrap_or(IncrementalUnbalancedMerkleTree::new());
// Check we are indexing a linear chain
if block.header.builds_upon() !=
builds_upon.clone().calculate(serai_client_serai::abi::BLOCK_HEADER_BRANCH_TAG)
{
Err(format!(
"node's block #{block_number} doesn't build upon the block #{} prior indexed",
block_number - 1
))?;
}
let block_hash = block.header.hash();
SubstrateBlockHash::set(&mut txn, block_number, &block_hash);
builds_upon.append(
serai_client_serai::abi::BLOCK_HEADER_BRANCH_TAG,
Blake2b256::new_with_prefix([serai_client_serai::abi::BLOCK_HEADER_LEAF_TAG])
.chain_update(block_hash.0)
.finalize()
.into(),
);
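// Leaves and branches are domain-separated: each block hash is hashed with
// `BLOCK_HEADER_LEAF_TAG` before being appended, while internal nodes use
// `BLOCK_HEADER_BRANCH_TAG`, preventing leaf/branch ambiguity in the tree.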
BuildsUpon::set(&mut txn, &builds_upon);
// Update the stakes
for event in events.validator_sets().allocation_events() {
let Event::Allocation { validator, network, amount } = event else {
panic!("event from `allocation_events` wasn't `Event::Allocation`")
};
let Ok(network) = ExternalNetworkId::try_from(*network) else { continue };
let existing = Stakes::get(&txn, network, *validator).unwrap_or(Amount(0));
Stakes::set(&mut txn, network, *validator, &Amount(existing.0 + amount.0));
}
for event in events.validator_sets().deallocation_events() {
let Event::Deallocation { validator, network, amount, timeline: _ } = event else {
panic!("event from `deallocation_events` wasn't `Event::Deallocation`")
};
let Ok(network) = ExternalNetworkId::try_from(*network) else { continue };
let existing = Stakes::get(&txn, network, *validator).unwrap_or(Amount(0));
Stakes::set(&mut txn, network, *validator, &Amount(existing.0 - amount.0));
}
// Handle decided sets
for event in events.validator_sets().set_decided_events() {
let Event::SetDecided { set, validators } = event else {
panic!("event from `set_decided_events` wasn't `Event::SetDecided`")
};
let Ok(set) = ExternalValidatorSet::try_from(*set) else { continue };
Validators::set(
&mut txn,
set,
&validators.iter().map(|(validator, _key_shares)| *validator).collect(),
);
}
// Handle declarations of the latest set
for event in events.validator_sets().set_keys_events() {
let Event::SetKeys { set, key_pair } = event else {
panic!("event from `set_keys_events` wasn't `Event::SetKeys`")
};
let mut stake = 0;
for validator in
Validators::take(&mut txn, *set).expect("set which wasn't decided set keys")
{
stake += Stakes::get(&txn, set.network, validator).unwrap_or(Amount(0)).0;
}
LatestSet::set(
&mut txn,
set.network,
&Set { session: set.session, key: key_pair.0, stake: Amount(stake) },
);
}
let global_session_for_this_block = LatestGlobalSessionIntended::get(&txn);
// If this is notable, it creates a new global session, which we index into the database
// now
if has_events == HasEvents::Notable {
let sets_and_keys_and_stakes = cosigning_sets(&txn);
let global_session = GlobalSession::id(
sets_and_keys_and_stakes.iter().map(|(set, _key, _stake)| *set).collect(),
);
let mut sets = Vec::with_capacity(sets_and_keys_and_stakes.len());
let mut keys = HashMap::with_capacity(sets_and_keys_and_stakes.len());
let mut stakes = HashMap::with_capacity(sets_and_keys_and_stakes.len());
let mut total_stake = 0;
for (set, key, stake) in sets_and_keys_and_stakes {
sets.push(set);
keys.insert(set.network, key);
stakes.insert(set.network, stake.0);
total_stake += stake.0;
}
if total_stake == 0 {
Err(format!("cosigning sets for block #{block_number} had 0 stake in total"))?;
@@ -152,14 +244,14 @@ impl<D: Db> ContinuallyRan for CosignIntendTask<D> {
// Tell each set of their expectation to cosign this block
for set in global_session_info.sets {
log::debug!("{:?} will be cosigning block #{block_number}", set);
log::debug!("{set:?} will be cosigning block #{block_number}");
IntendedCosigns::send(
&mut txn,
set,
&CosignIntent {
global_session: global_session_for_this_block,
block_number,
block_hash,
notable: has_events == HasEvents::Notable,
},
);

View File

@@ -1,24 +1,33 @@
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
#![cfg_attr(docsrs, feature(doc_cfg))]
#![doc = include_str!("../README.md")]
#![deny(missing_docs)]
use core::{fmt::Debug, future::Future};
use std::collections::HashMap;
use std::{sync::Arc, collections::HashMap, time::Instant};
use blake2::{Digest, Blake2s256};
use scale::{Encode, Decode};
use borsh::{BorshSerialize, BorshDeserialize};
use serai_client_serai::{
abi::{
primitives::{
BlockHash,
crypto::{Public, KeyPair},
network_id::ExternalNetworkId,
validator_sets::{Session, ExternalValidatorSet},
address::SeraiAddress,
},
Block,
},
Serai, State,
};
use serai_db::*;
use serai_task::*;
pub use serai_cosign_types::*;
/// The cosigns which are intended to be performed.
mod intend;
/// The evaluator of the cosigns.
@@ -28,9 +37,6 @@ mod delay;
pub use delay::BROADCAST_FREQUENCY;
use delay::LatestCosignedBlockNumber;
/// A 'global session', defined as all validator sets used for cosigning at a given moment.
///
/// We evaluate cosign faults within a global session. This ensures even if cosigners cosign
@@ -52,13 +58,13 @@ pub const COSIGN_CONTEXT: &[u8] = b"/serai/coordinator/cosign";
#[derive(Debug, BorshSerialize, BorshDeserialize)]
pub(crate) struct GlobalSession {
pub(crate) start_block_number: u64,
pub(crate) sets: Vec<ExternalValidatorSet>,
pub(crate) keys: HashMap<ExternalNetworkId, Public>,
pub(crate) stakes: HashMap<ExternalNetworkId, u64>,
pub(crate) total_stake: u64,
}
impl GlobalSession {
fn id(mut cosigners: Vec<ExternalValidatorSet>) -> [u8; 32] {
cosigners.sort_by_key(|a| borsh::to_vec(a).unwrap());
Blake2s256::digest(borsh::to_vec(&cosigners).unwrap()).into()
}
@@ -78,56 +84,12 @@ enum HasEvents {
No,
}
create_db! {
Cosign {
// The following are populated by the intend task and used throughout the library
// An index of Substrate blocks
SubstrateBlockHash: (block_number: u64) -> BlockHash,
// A mapping from a global session's ID to its relevant information.
GlobalSessions: (global_session: [u8; 32]) -> GlobalSession,
// The last block to be cosigned by a global session.
@@ -148,7 +110,10 @@ create_db! {
// one notable block. All validator sets will explicitly produce a cosign for their notable
// block, causing the latest cosigned block for a global session to either be the global
// session's notable cosigns or the network's latest cosigns.
NetworksLatestCosignedBlock: (
global_session: [u8; 32],
network: ExternalNetworkId
) -> SignedCosign,
// Cosigns received for blocks not locally recognized as finalized.
Faults: (global_session: [u8; 32]) -> Vec<SignedCosign>,
// The global session which faulted.
@@ -156,62 +121,6 @@ create_db! {
}
}
/// An object usable to request notable cosigns for a block.
pub trait RequestNotableCosigns: 'static + Send {
/// The error type which may be encountered when requesting notable cosigns.
@@ -228,6 +137,43 @@ pub trait RequestNotableCosigns: 'static + Send {
#[derive(Debug)]
pub struct Faulted;
/// An error incurred while intaking a cosign.
#[derive(Debug)]
pub enum IntakeCosignError {
/// Cosign is for a not-yet-indexed block
NotYetIndexedBlock,
/// A later cosign for this cosigner has already been handled
StaleCosign,
/// The cosign's global session isn't recognized
UnrecognizedGlobalSession,
/// The cosign is for a block before its global session starts
BeforeGlobalSessionStart,
/// The cosign is for a block after its global session ends
AfterGlobalSessionEnd,
/// The cosign's signing network wasn't a participant in this global session
NonParticipatingNetwork,
/// The cosign had an invalid signature
InvalidSignature,
/// The cosign is for a global session which has yet to have its declaration block cosigned
FutureGlobalSession,
}
impl IntakeCosignError {
/// If this error is temporal to the local view
pub fn temporal(&self) -> bool {
match self {
IntakeCosignError::NotYetIndexedBlock |
IntakeCosignError::StaleCosign |
IntakeCosignError::UnrecognizedGlobalSession |
IntakeCosignError::FutureGlobalSession => true,
IntakeCosignError::BeforeGlobalSessionStart |
IntakeCosignError::AfterGlobalSessionEnd |
IntakeCosignError::NonParticipatingNetwork |
IntakeCosignError::InvalidSignature => false,
}
}
}
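// A sketch of how a caller might branch on `temporal`, where
// `queue_for_retry`/`drop_as_invalid` are hypothetical helpers:
//
//   match cosigning.intake_cosign(&signed_cosign) {
//     Ok(()) => {}
//     // May succeed later, once the local view catches up
//     Err(e) if e.temporal() => queue_for_retry(signed_cosign),
//     // Provably invalid, regardless of local state
//     Err(_) => drop_as_invalid(signed_cosign),
//   }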
/// The interface to manage cosigning with.
pub struct Cosigning<D: Db> {
db: D,
@@ -239,7 +185,7 @@ impl<D: Db> Cosigning<D> {
/// only used once at any given time.
pub fn spawn<R: RequestNotableCosigns>(
db: D,
serai: Arc<Serai>,
request: R,
tasks_to_run_upon_cosigning: Vec<TaskHandle>,
) -> Self {
@@ -251,8 +197,12 @@ impl<D: Db> Cosigning<D> {
.continually_run(intend_task, vec![evaluator_task_handle]),
);
tokio::spawn(
(evaluator::CosignEvaluatorTask {
db: db.clone(),
request,
last_request_for_cosigns: Instant::now(),
})
.continually_run(evaluator_task, vec![delay_task_handle]),
);
tokio::spawn(
(delay::CosignDelayTask { db: db.clone() })
@@ -270,14 +220,17 @@ impl<D: Db> Cosigning<D> {
Ok(LatestCosignedBlockNumber::get(getter).unwrap_or(0))
}
/// Fetch a cosigned Substrate block's hash by its block number.
pub fn cosigned_block(
getter: &impl Get,
block_number: u64,
) -> Result<Option<BlockHash>, Faulted> {
if block_number > Self::latest_cosigned_block_number(getter)? {
return Ok(None);
}
Ok(Some(
SubstrateBlockHash::get(getter, block_number).expect("cosigned block but didn't index it"),
))
}
@@ -286,8 +239,8 @@ impl<D: Db> Cosigning<D> {
/// If this global session hasn't produced any notable cosigns, this will return the latest
/// cosigns for this session.
pub fn notable_cosigns(getter: &impl Get, global_session: [u8; 32]) -> Vec<SignedCosign> {
let mut cosigns = vec![];
for network in ExternalNetworkId::all() {
if let Some(cosign) = NetworksLatestCosignedBlock::get(getter, global_session, network) {
cosigns.push(cosign);
}
@@ -304,7 +257,7 @@ impl<D: Db> Cosigning<D> {
let mut cosigns = Faults::get(&self.db, faulted).expect("faulted with no faults");
// Also include all of our recognized-as-honest cosigns in an attempt to induce fault
// identification in those who see the faulty cosigns as honest
for network in ExternalNetworkId::all() {
if let Some(cosign) = NetworksLatestCosignedBlock::get(&self.db, faulted, network) {
if cosign.cosign.global_session == faulted {
cosigns.push(cosign);
@@ -316,8 +269,8 @@ impl<D: Db> Cosigning<D> {
let Some(global_session) = evaluator::currently_evaluated_global_session(&self.db) else {
return vec![];
};
let mut cosigns = vec![];
for network in ExternalNetworkId::all() {
if let Some(cosign) = NetworksLatestCosignedBlock::get(&self.db, global_session, network) {
cosigns.push(cosign);
}
@@ -326,27 +279,16 @@ impl<D: Db> Cosigning<D> {
}
}
/// Intake a cosign.
//
// Takes `&mut self` as this should only be called once at any given moment.
pub fn intake_cosign(&mut self, signed_cosign: &SignedCosign) -> Result<(), IntakeCosignError> {
let cosign = &signed_cosign.cosign;
let network = cosign.cosigner;
// Check our indexed blockchain includes a block with this block number
let Some(our_block_hash) = SubstrateBlockHash::get(&self.db, cosign.block_number) else {
Err(IntakeCosignError::NotYetIndexedBlock)?
};
let faulty = cosign.block_hash != our_block_hash;
@@ -356,20 +298,19 @@ impl<D: Db> Cosigning<D> {
NetworksLatestCosignedBlock::get(&self.db, cosign.global_session, network)
{
if existing.cosign.block_number >= cosign.block_number {
Err(IntakeCosignError::StaleCosign)?;
}
}
}
let Some(global_session) = GlobalSessions::get(&self.db, cosign.global_session) else {
Err(IntakeCosignError::UnrecognizedGlobalSession)?
};
// Check the cosigned block number is in range to the global session
if cosign.block_number < global_session.start_block_number {
// Cosign is for a block predating the global session
Err(IntakeCosignError::BeforeGlobalSessionStart)?;
}
if !faulty {
// This prevents a malicious validator set, on the same chain, from producing a cosign after
@@ -377,22 +318,17 @@ impl<D: Db> Cosigning<D> {
if let Some(last_block) = GlobalSessionsLastBlock::get(&self.db, cosign.global_session) {
if cosign.block_number > last_block {
// Cosign is for a block after the last block this global session should have signed
Err(IntakeCosignError::AfterGlobalSessionEnd)?;
}
}
}
// Check the cosign's signature
{
let key =
*global_session.keys.get(&network).ok_or(IntakeCosignError::NonParticipatingNetwork)?;
if !signed_cosign.verify_signature(key) {
Err(IntakeCosignError::InvalidSignature)?;
}
}
@@ -408,7 +344,7 @@ impl<D: Db> Cosigning<D> {
// block declaring it was cosigned
if (global_session.start_block_number - 1) > latest_cosigned_block_number {
drop(txn);
return Err(IntakeCosignError::FutureGlobalSession);
}
// This is safe as it's in-range and newer, as checked prior (given it isn't faulty)
@@ -422,9 +358,10 @@ impl<D: Db> Cosigning<D> {
let mut weight_cosigned = 0;
for fault in &faults {
let stake = global_session
.stakes
.get(&fault.cosign.cosigner)
.expect("cosigner with recognized key didn't have a stake entry saved");
weight_cosigned += stake;
}
@@ -436,15 +373,15 @@ impl<D: Db> Cosigning<D> {
}
txn.commit();
Ok(())
}
/// Receive intended cosigns to produce for this ExternalValidatorSet.
///
/// All cosigns intended, up to and including the next notable cosign, are returned.
///
/// This will drain the internal channel and not re-yield these intentions again.
pub fn intended_cosigns(txn: &mut impl DbTxn, set: ExternalValidatorSet) -> Vec<CosignIntent> {
let mut res: Vec<CosignIntent> = vec![];
// While we have yet to find a notable cosign...
while !res.last().map(|cosign| cosign.notable).unwrap_or(false) {

View File

@@ -0,0 +1,25 @@
[package]
name = "serai-cosign-types"
version = "0.1.0"
description = "Evaluator of cosigns for the Serai network"
license = "AGPL-3.0-only"
repository = "https://github.com/serai-dex/serai/tree/develop/coordinator/cosign"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = []
edition = "2021"
publish = false
rust-version = "1.85"
[package.metadata.docs.rs]
all-features = true
rustdoc-args = ["--cfg", "docsrs"]
[lints]
workspace = true
[dependencies]
schnorrkel = { version = "0.11", default-features = false, features = ["std"] }
borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] }
serai-primitives = { path = "../../../substrate/primitives", default-features = false, features = ["std"] }

View File

@@ -1,6 +1,6 @@
AGPL-3.0-only license
Copyright (c) 2024 Luke Parker
Copyright (c) 2023-2025 Luke Parker
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License Version 3 as

View File

@@ -0,0 +1,72 @@
#![cfg_attr(docsrs, feature(doc_cfg))]
#![deny(missing_docs)]
//! Types used when cosigning Serai. For more info, please see `serai-cosign`.
use borsh::{BorshSerialize, BorshDeserialize};
use serai_primitives::{BlockHash, crypto::Public, network_id::ExternalNetworkId};
/// The schnorrkel context to use when signing a cosign.
pub const COSIGN_CONTEXT: &[u8] = b"/serai/coordinator/cosign";
/// An intended cosign.
#[derive(Clone, Copy, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)]
pub struct CosignIntent {
/// The global session this cosign is being performed under.
pub global_session: [u8; 32],
/// The number of the block to cosign.
pub block_number: u64,
/// The hash of the block to cosign.
pub block_hash: BlockHash,
/// If this cosign must be handled before further cosigns are.
pub notable: bool,
}
/// A cosign.
#[derive(Clone, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)]
pub struct Cosign {
/// The global session this cosign is being performed under.
pub global_session: [u8; 32],
/// The number of the block to cosign.
pub block_number: u64,
/// The hash of the block to cosign.
pub block_hash: BlockHash,
/// The actual cosigner.
pub cosigner: ExternalNetworkId,
}
impl CosignIntent {
/// Convert this into a `Cosign`.
pub fn into_cosign(self, cosigner: ExternalNetworkId) -> Cosign {
let CosignIntent { global_session, block_number, block_hash, notable: _ } = self;
Cosign { global_session, block_number, block_hash, cosigner }
}
}
impl Cosign {
/// The message to sign for this cosign.
///
/// This must be signed with schnorrkel, the context set to `COSIGN_CONTEXT`.
pub fn signature_message(&self) -> Vec<u8> {
// We use a schnorrkel context to domain-separate this
borsh::to_vec(self).unwrap()
}
}
/// A signed cosign.
#[derive(Clone, Debug, BorshSerialize, BorshDeserialize)]
pub struct SignedCosign {
/// The cosign.
pub cosign: Cosign,
/// The signature for the cosign.
pub signature: [u8; 64],
}
impl SignedCosign {
/// Verify a cosign's signature.
pub fn verify_signature(&self, signer: Public) -> bool {
let Ok(signer) = schnorrkel::PublicKey::from_bytes(&signer.0) else { return false };
let Ok(signature) = schnorrkel::Signature::from_bytes(&self.signature) else { return false };
signer.verify_simple(COSIGN_CONTEXT, &self.cosign.signature_message(), &signature).is_ok()
}
}
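// A sketch, not part of this crate, of producing a `SignedCosign`, assuming a
// schnorrkel `Keypair` is at hand. `sign_simple` is the counterpart to the
// `verify_simple` call above, using the same `COSIGN_CONTEXT`:
//
//   fn sign_cosign(keypair: &schnorrkel::Keypair, cosign: Cosign) -> SignedCosign {
//     let signature = keypair.sign_simple(COSIGN_CONTEXT, &cosign.signature_message());
//     SignedCosign { cosign, signature: signature.to_bytes() }
//   }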

View File

@@ -8,7 +8,7 @@ authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = []
edition = "2021"
publish = false
rust-version = "1.81"
rust-version = "1.85"
[package.metadata.docs.rs]
all-features = true
@@ -22,12 +22,12 @@ borsh = { version = "1", default-features = false, features = ["std", "derive",
serai-db = { path = "../../common/db", version = "0.1" }
serai-primitives = { path = "../../substrate/primitives", default-features = false, features = ["std"] }
serai-cosign = { path = "../cosign" }
tributary = { package = "tributary-chain", path = "../tributary" }
tributary-sdk = { path = "../tributary-sdk" }
async-channel = { version = "2", default-features = false, features = ["std"] }
futures-lite = { version = "2", default-features = false, features = ["std"] }
tokio = { version = "1", default-features = false, features = ["sync", "macros"] }
log = { version = "0.4", default-features = false, features = ["std"] }
serai-task = { path = "../../common/task", version = "0.1" }

View File

@@ -1,3 +1,3 @@
# Serai Coordinator P2P
The P2P abstraction used by Serai's coordinator, and tasks over it.

View File

@@ -8,7 +8,7 @@ authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = []
edition = "2021"
publish = false
rust-version = "1.81"
rust-version = "1.87"
[package.metadata.docs.rs]
all-features = true
@@ -23,20 +23,19 @@ async-trait = { version = "0.1", default-features = false }
rand_core = { version = "0.6", default-features = false, features = ["std"] }
zeroize = { version = "^1.5", default-features = false, features = ["std"] }
blake2 = { version = "0.10", default-features = false, features = ["std"] }
blake2 = { version = "0.11.0-rc.0", default-features = false, features = ["alloc"] }
schnorrkel = { version = "0.11", default-features = false, features = ["std"] }
hex = { version = "0.4", default-features = false, features = ["std"] }
borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] }
serai-client-serai = { path = "../../../substrate/client/serai", default-features = false }
serai-cosign = { path = "../../cosign" }
tributary-sdk = { path = "../../tributary-sdk" }
void = { version = "1", default-features = false }
futures-util = { version = "0.3", default-features = false, features = ["std"] }
tokio = { version = "1", default-features = false, features = ["sync"] }
libp2p = { version = "0.52", default-features = false, features = ["tokio", "tcp", "noise", "yamux", "ping", "request-response", "gossipsub", "macros"] }
libp2p = { version = "0.56", default-features = false, features = ["tokio", "tcp", "noise", "yamux", "ping", "request-response", "gossipsub", "macros"] }
log = { version = "0.4", default-features = false, features = ["std"] }
serai-task = { path = "../../../common/task", version = "0.1" }

View File

@@ -7,12 +7,11 @@ use rand_core::{RngCore, OsRng};
use blake2::{Digest, Blake2s256};
use schnorrkel::{Keypair, PublicKey, Signature};
use serai_client_serai::abi::primitives::crypto::Public;
use futures_util::{AsyncRead, AsyncReadExt, AsyncWrite, AsyncWriteExt};
use libp2p::{
core::UpgradeInfo,
core::upgrade::{UpgradeInfo, InboundConnectionUpgrade, OutboundConnectionUpgrade},
identity::{self, PeerId},
noise,
};
@@ -105,7 +104,7 @@ impl OnlyValidators {
.verify_simple(PROTOCOL.as_bytes(), &msg, &sig)
.map_err(|_| io::Error::other("invalid signature"))?;
Ok(peer_id_from_public(Public(public_key.to_bytes())))
}
}
@@ -119,12 +118,18 @@ impl UpgradeInfo for OnlyValidators {
}
}
impl<S: 'static + Send + Unpin + AsyncRead + AsyncWrite> InboundConnectionUpgrade<S>
for OnlyValidators
{
type Output = (PeerId, noise::Output<S>);
type Error = io::Error;
type Future = Pin<Box<dyn Send + Future<Output = Result<Self::Output, Self::Error>>>>;
fn upgrade_inbound(
self,
socket: S,
info: <Self as UpgradeInfo>::Info,
) -> <Self as InboundConnectionUpgrade<S>>::Future {
Box::pin(async move {
let (dialer_noise_peer_id, mut socket) = noise::Config::new(&self.noise_keypair)
.unwrap()
@@ -147,12 +152,18 @@ impl<S: 'static + Send + Unpin + AsyncRead + AsyncWrite> InboundUpgrade<S> for O
}
}
impl<S: 'static + Send + Unpin + AsyncRead + AsyncWrite> OutboundConnectionUpgrade<S>
for OnlyValidators
{
type Output = (PeerId, noise::Output<S>);
type Error = io::Error;
type Future = Pin<Box<dyn Send + Future<Output = Result<Self::Output, Self::Error>>>>;
fn upgrade_outbound(
self,
socket: S,
info: <Self as UpgradeInfo>::Info,
) -> <Self as OutboundConnectionUpgrade<S>>::Future {
Box::pin(async move {
let (listener_noise_peer_id, mut socket) = noise::Config::new(&self.noise_keypair)
.unwrap()

View File

@@ -1,11 +1,11 @@
use core::{future::Future, str::FromStr};
use std::{sync::Arc, collections::HashSet};
use rand_core::{RngCore, OsRng};
use tokio::sync::mpsc;
use serai_client_serai::{RpcError, Serai};
use libp2p::{
core::multiaddr::{Protocol, Multiaddr},
@@ -29,14 +29,18 @@ const TARGET_PEERS_PER_NETWORK: usize = 5;
// TODO const TARGET_DIALED_PEERS_PER_NETWORK: usize = 3;
pub(crate) struct DialTask {
serai: Arc<Serai>,
validators: Validators,
peers: Peers,
to_dial: mpsc::UnboundedSender<DialOpts>,
}
impl DialTask {
pub(crate) fn new(
serai: Arc<Serai>,
peers: Peers,
to_dial: mpsc::UnboundedSender<DialOpts>,
) -> Self {
DialTask { serai: serai.clone(), validators: Validators::new(serai).0, peers, to_dial }
}
}
@@ -46,7 +50,9 @@ impl ContinuallyRan for DialTask {
const DELAY_BETWEEN_ITERATIONS: u64 = 5 * 60;
const MAX_DELAY_BETWEEN_ITERATIONS: u64 = 10 * 60;
type Error = RpcError;
fn run_iteration(&mut self) -> impl Send + Future<Output = Result<bool, Self::Error>> {
async move {
self.validators.update().await?;
@@ -79,8 +85,7 @@ impl ContinuallyRan for DialTask {
.unwrap_or(0)
.saturating_sub(1))
{
let mut potential_peers = self.serai.p2p_validators(network).await?;
for _ in 0 .. (TARGET_PEERS_PER_NETWORK - peer_count) {
if potential_peers.is_empty() {
break;
@@ -89,6 +94,13 @@ impl ContinuallyRan for DialTask {
usize::try_from(OsRng.next_u64() % u64::try_from(potential_peers.len()).unwrap())
.unwrap();
let randomly_selected_peer = potential_peers.swap_remove(index_to_dial);
let Ok(randomly_selected_peer) = libp2p::Multiaddr::from_str(&randomly_selected_peer)
else {
log::error!(
"peer from substrate wasn't a valid `Multiaddr`: {randomly_selected_peer}"
);
continue;
};
log::info!("found peer from substrate: {randomly_selected_peer}");

View File

@@ -13,7 +13,7 @@ pub use libp2p::gossipsub::Event;
use serai_cosign::SignedCosign;
// Block size limit + 16 KB of space for signatures/metadata
pub(crate) const MAX_LIBP2P_GOSSIP_MESSAGE_SIZE: usize = tributary_sdk::BLOCK_SIZE_LIMIT + 16384;
const LIBP2P_PROTOCOL: &str = "/serai/coordinator/gossip/1.0.0";
const BASE_TOPIC: &str = "/";
@@ -42,9 +42,10 @@ pub(crate) type Behavior = Behaviour<IdentityTransform, AllowAllSubscriptionFilt
pub(crate) fn new_behavior() -> Behavior {
// The latency used by the Tendermint protocol, used here as the gossip epoch duration
// libp2p-rs defaults to 1 second, whereas ours will be ~2
let heartbeat_interval = tributary_sdk::tendermint::LATENCY_TIME;
// The amount of heartbeats which will occur within a single Tributary block
let heartbeats_per_block =
tributary_sdk::tendermint::TARGET_BLOCK_TIME.div_ceil(heartbeat_interval);
// libp2p-rs defaults to 5, whereas ours will be ~8
let heartbeats_to_keep = 2 * heartbeats_per_block;
// libp2p-rs defaults to 3 whereas ours will be ~4
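// Illustrative arithmetic only, assuming LATENCY_TIME ~ 2_000ms and
// TARGET_BLOCK_TIME ~ 8_000ms (consistent with the "~2", "~8", and "~4"
// figures noted above): heartbeats_per_block = div_ceil(8_000, 2_000) = 4,
// and heartbeats_to_keep = 2 * 4 = 8.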

View File

@@ -1,4 +1,4 @@
#![cfg_attr(docsrs, feature(doc_cfg))]
#![doc = include_str!("../README.md")]
#![deny(missing_docs)]
@@ -13,13 +13,14 @@ use rand_core::{RngCore, OsRng};
use zeroize::Zeroizing;
use schnorrkel::Keypair;
use serai_client_serai::{
abi::primitives::{
crypto::Public, network_id::ExternalNetworkId, validator_sets::ExternalValidatorSet,
},
Serai,
};
use tokio::sync::{mpsc, oneshot, Mutex, RwLock};
use serai_task::{Task, ContinuallyRan};
@@ -35,7 +36,7 @@ use libp2p::{
SwarmBuilder,
};
use serai_coordinator_p2p::{Heartbeat, TributaryBlockWithCommit};
/// A struct to sync the validators from the Serai node in order to keep track of them.
mod validators;
@@ -50,7 +51,7 @@ mod ping;
/// The request-response messages and behavior
mod reqres;
use reqres::{InboundRequestId, Request, Response};
/// The gossip messages and behavior
mod gossip;
@@ -66,15 +67,7 @@ use dial::DialTask;
const PORT: u16 = 30563; // 5132 ^ (('c' << 8) | 'o')
fn peer_id_from_public(public: Public) -> PeerId {
// 0 represents the identity Multihash, that no hash was performed
// It's an internal constant so we can't refer to the constant inside libp2p
PeerId::from_multihash(Multihash::wrap(0, &public.0).unwrap()).unwrap()
@@ -112,7 +105,7 @@ impl serai_coordinator_p2p::Peer<'_> for Peer<'_> {
#[derive(Clone)]
struct Peers {
peers: Arc<RwLock<HashMap<ExternalNetworkId, HashSet<PeerId>>>>,
}
// Consider adding identify/kad/autonat/rendevous/(relay + dcutr). While we currently use the Serai
@@ -131,33 +124,36 @@ struct Behavior {
gossip: gossip::Behavior,
}
#[allow(clippy::type_complexity)]
struct Libp2pInner {
peers: Peers,
gossip: mpsc::UnboundedSender<Message>,
outbound_requests: mpsc::UnboundedSender<(PeerId, Request, oneshot::Sender<Response>)>,
tributary_gossip: Mutex<mpsc::UnboundedReceiver<([u8; 32], Vec<u8>)>>,
signed_cosigns: Mutex<mpsc::UnboundedReceiver<SignedCosign>>,
signed_cosigns_send: mpsc::UnboundedSender<SignedCosign>,
heartbeat_requests:
Mutex<mpsc::UnboundedReceiver<(InboundRequestId, ExternalValidatorSet, [u8; 32])>>,
notable_cosign_requests: Mutex<mpsc::UnboundedReceiver<(InboundRequestId, [u8; 32])>>,
inbound_request_responses: mpsc::UnboundedSender<(InboundRequestId, Response)>,
}
/// The libp2p-backed P2P implementation.
///
/// The P2p trait implementation does not support backpressure and is expected to be fully
/// utilized. Failure to poll the entire API will cause unbounded memory growth.
#[derive(Clone)]
pub struct Libp2p(Arc<Libp2pInner>);
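The backpressure warning in the doc comment follows from the struct's use of `tokio::sync::mpsc::unbounded_channel`: an unbounded sender never blocks, so if a receiver such as `tributary_gossip` stops being polled, queued messages accumulate without limit. A minimal sketch of the failure mode, assuming the internal channels are unbounded as the field types indicate:

```rust
use tokio::sync::mpsc;

#[tokio::main]
async fn main() {
  let (send, mut recv) = mpsc::unbounded_channel::<Vec<u8>>();

  // An unbounded sender always succeeds immediately; nothing ever applies
  // backpressure to the producer.
  for _ in 0 .. 1_000 {
    send.send(vec![0; 1024]).unwrap();
  }

  // Memory is only reclaimed by draining the receiver. If this loop (the
  // analogue of polling the P2p trait's futures) stops running, the queue
  // grows without bound.
  while let Ok(msg) = recv.try_recv() {
    drop(msg);
  }
}
```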
impl Libp2p {
/// Create a new libp2p-backed P2P instance.
///
/// This will spawn all of the internal tasks necessary for functioning.
pub fn new(serai_key: &Zeroizing<Keypair>, serai: Serai) -> Libp2p {
pub fn new(serai_key: &Zeroizing<Keypair>, serai: Arc<Serai>) -> Libp2p {
// Define the object we track peers with
let peers = Peers { peers: Arc::new(RwLock::new(HashMap::new())) };
@@ -174,19 +170,9 @@ impl Libp2p {
Ok(OnlyValidators { serai_key: serai_key.clone(), noise_keypair: noise_keypair.clone() })
};
let new_yamux = || {
let mut config = yamux::Config::default();
// 1 MiB default + max message size
config.set_max_buffer_size((1024 * 1024) + MAX_LIBP2P_MESSAGE_SIZE);
// 256 KiB default + max message size
config
.set_receive_window_size(((256 * 1024) + MAX_LIBP2P_MESSAGE_SIZE).try_into().unwrap());
config
};
let mut swarm = SwarmBuilder::with_existing_identity(identity::Keypair::generate_ed25519())
.with_tokio()
.with_tcp(TcpConfig::default().nodelay(false), new_only_validators, new_yamux)
.with_tcp(TcpConfig::default().nodelay(true), new_only_validators, yamux::Config::default)
.unwrap()
.with_behaviour(|_| Behavior {
allow_list: allow_block_list::Behaviour::default(),
@@ -239,28 +225,29 @@ impl Libp2p {
inbound_request_responses_recv,
);
Libp2p {
Libp2p(Arc::new(Libp2pInner {
peers,
gossip: gossip_send,
outbound_requests: outbound_requests_send,
tributary_gossip: Arc::new(Mutex::new(tributary_gossip_recv)),
tributary_gossip: Mutex::new(tributary_gossip_recv),
signed_cosigns: Arc::new(Mutex::new(signed_cosigns_recv)),
signed_cosigns: Mutex::new(signed_cosigns_recv),
signed_cosigns_send,
heartbeat_requests: Arc::new(Mutex::new(heartbeat_requests_recv)),
notable_cosign_requests: Arc::new(Mutex::new(notable_cosign_requests_recv)),
heartbeat_requests: Mutex::new(heartbeat_requests_recv),
notable_cosign_requests: Mutex::new(notable_cosign_requests_recv),
inbound_request_responses: inbound_request_responses_send,
}
}))
}
}
impl tributary::P2p for Libp2p {
impl tributary_sdk::P2p for Libp2p {
fn broadcast(&self, tributary: [u8; 32], message: Vec<u8>) -> impl Send + Future<Output = ()> {
async move {
self
.0
.gossip
.send(Message::Tributary { tributary, message })
.expect("gossip recv channel was dropped?");
@@ -281,7 +268,7 @@ impl serai_cosign::RequestNotableCosigns for Libp2p {
let request = Request::NotableCosigns { global_session };
let peers = self.peers.peers.read().await.clone();
let peers = self.0.peers.peers.read().await.clone();
// HashSet of all peers
let peers = peers.into_values().flat_map(<_>::into_iter).collect::<HashSet<_>>();
// Vec of all peers
@@ -297,6 +284,7 @@ impl serai_cosign::RequestNotableCosigns for Libp2p {
let (sender, receiver) = oneshot::channel();
self
.0
.outbound_requests
.send((peer, request, sender))
.expect("outbound requests recv channel was dropped?");
@@ -310,6 +298,7 @@ impl serai_cosign::RequestNotableCosigns for Libp2p {
{
for cosign in cosigns {
self
.0
.signed_cosigns_send
.send(cosign)
.expect("signed_cosigns recv in this object was dropped?");
@@ -325,24 +314,31 @@ impl serai_cosign::RequestNotableCosigns for Libp2p {
impl serai_coordinator_p2p::P2p for Libp2p {
type Peer<'a> = Peer<'a>;
fn peers(&self, network: NetworkId) -> impl Send + Future<Output = Vec<Self::Peer<'_>>> {
fn peers(&self, network: ExternalNetworkId) -> impl Send + Future<Output = Vec<Self::Peer<'_>>> {
async move {
let Some(peer_ids) = self.peers.peers.read().await.get(&network).cloned() else {
let Some(peer_ids) = self.0.peers.peers.read().await.get(&network).cloned() else {
return vec![];
};
let mut res = vec![];
for id in peer_ids {
res.push(Peer { outbound_requests: &self.outbound_requests, id });
res.push(Peer { outbound_requests: &self.0.outbound_requests, id });
}
res
}
}
fn publish_cosign(&self, cosign: SignedCosign) -> impl Send + Future<Output = ()> {
async move {
self.0.gossip.send(Message::Cosign(cosign)).expect("gossip recv channel was dropped?");
}
}
fn heartbeat(
&self,
) -> impl Send + Future<Output = (Heartbeat, oneshot::Sender<Vec<TributaryBlockWithCommit>>)> {
async move {
let (request_id, set, latest_block_hash) = self
.0
.heartbeat_requests
.lock()
.await
@@ -351,7 +347,7 @@ impl serai_coordinator_p2p::P2p for Libp2p {
.expect("heartbeat_requests_send was dropped?");
let (sender, receiver) = oneshot::channel();
tokio::spawn({
let respond = self.inbound_request_responses.clone();
let respond = self.0.inbound_request_responses.clone();
async move {
// The swarm task expects us to respond to every request. If the caller drops this
// channel, we'll receive `Err` and respond with `vec![]`, safely satisfying that bound
@@ -375,6 +371,7 @@ impl serai_coordinator_p2p::P2p for Libp2p {
) -> impl Send + Future<Output = ([u8; 32], oneshot::Sender<Vec<SignedCosign>>)> {
async move {
let (request_id, global_session) = self
.0
.notable_cosign_requests
.lock()
.await
@@ -383,7 +380,7 @@ impl serai_coordinator_p2p::P2p for Libp2p {
.expect("notable_cosign_requests_send was dropped?");
let (sender, receiver) = oneshot::channel();
tokio::spawn({
let respond = self.inbound_request_responses.clone();
let respond = self.0.inbound_request_responses.clone();
async move {
let response = if let Ok(notable_cosigns) = receiver.await {
Response::NotableCosigns(notable_cosigns)
@@ -401,13 +398,14 @@ impl serai_coordinator_p2p::P2p for Libp2p {
fn tributary_message(&self) -> impl Send + Future<Output = ([u8; 32], Vec<u8>)> {
async move {
self.tributary_gossip.lock().await.recv().await.expect("tributary_gossip send was dropped?")
self.0.tributary_gossip.lock().await.recv().await.expect("tributary_gossip send was dropped?")
}
}
fn cosign(&self) -> impl Send + Future<Output = SignedCosign> {
async move {
self
.0
.signed_cosigns
.lock()
.await

View File

@@ -1,6 +1,6 @@
use core::time::Duration;
use tributary::tendermint::LATENCY_TIME;
use tributary_sdk::tendermint::LATENCY_TIME;
use libp2p::ping::{self, Config, Behaviour};
pub use ping::Event;

View File

@@ -10,7 +10,7 @@ use futures_util::{AsyncRead, AsyncReadExt, AsyncWrite, AsyncWriteExt};
use libp2p::request_response::{
self, Codec as CodecTrait, Event as GenericEvent, Config, Behaviour, ProtocolSupport,
};
pub use request_response::{RequestId, Message};
pub use request_response::{InboundRequestId, Message};
use serai_cosign::SignedCosign;
@@ -19,7 +19,7 @@ use serai_coordinator_p2p::{Heartbeat, TributaryBlockWithCommit};
/// The maximum message size for the request-response protocol
// This is derived from the heartbeat message size as it's our largest message
pub(crate) const MAX_LIBP2P_REQRES_MESSAGE_SIZE: usize =
(tributary::BLOCK_SIZE_LIMIT * serai_coordinator_p2p::heartbeat::BLOCKS_PER_BATCH) + 1024;
1024 + serai_coordinator_p2p::heartbeat::BATCH_SIZE_LIMIT;
const PROTOCOL: &str = "/serai/coordinator/reqres/1.0.0";
@@ -129,7 +129,6 @@ pub(crate) type Event = GenericEvent<Request, Response>;
pub(crate) type Behavior = Behaviour<Codec>;
pub(crate) fn new_behavior() -> Behavior {
let mut config = Config::default();
config.set_request_timeout(Duration::from_secs(5));
let config = Config::default().with_request_timeout(Duration::from_secs(5));
Behavior::new([(PROTOCOL, ProtocolSupport::Full)], config)
}
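Since the request-response limit is derived from the heartbeat response (the largest message), one way to keep the derivation honest is a compile-time assertion. A sketch of that pattern, not present in the diff, with placeholder values standing in for the real constants:

```rust
// Placeholder values; the point is the compile-time check, not the numbers.
const BATCH_SIZE_LIMIT: usize = 1_000_000;
const MAX_LIBP2P_REQRES_MESSAGE_SIZE: usize = 1024 + BATCH_SIZE_LIMIT;

// `assert!` in a `const` item fails at compile time, ensuring a heartbeat
// response can never exceed the protocol's message size limit.
const _: () = assert!(MAX_LIBP2P_REQRES_MESSAGE_SIZE >= BATCH_SIZE_LIMIT);

fn main() {}
```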

View File

@@ -6,9 +6,9 @@ use std::{
use borsh::BorshDeserialize;
use serai_client::validator_sets::primitives::ValidatorSet;
use serai_client_serai::abi::primitives::validator_sets::ExternalValidatorSet;
use tokio::sync::{mpsc, RwLock};
use tokio::sync::{mpsc, oneshot, RwLock};
use serai_task::TaskHandle;
@@ -17,11 +17,11 @@ use serai_cosign::SignedCosign;
use futures_util::StreamExt;
use libp2p::{
identity::PeerId,
request_response::{RequestId, ResponseChannel},
request_response::{InboundRequestId, OutboundRequestId, ResponseChannel},
swarm::{dial_opts::DialOpts, SwarmEvent, Swarm},
};
use serai_coordinator_p2p::{oneshot, Heartbeat};
use serai_coordinator_p2p::Heartbeat;
use crate::{
Peers, BehaviorEvent, Behavior,
@@ -65,17 +65,12 @@ pub(crate) struct SwarmTask {
tributary_gossip: mpsc::UnboundedSender<([u8; 32], Vec<u8>)>,
outbound_requests: mpsc::UnboundedReceiver<(PeerId, Request, oneshot::Sender<Response>)>,
outbound_request_responses: HashMap<RequestId, oneshot::Sender<Response>>,
outbound_request_responses: HashMap<OutboundRequestId, oneshot::Sender<Response>>,
inbound_request_response_channels: HashMap<RequestId, ResponseChannel<Response>>,
heartbeat_requests: mpsc::UnboundedSender<(RequestId, ValidatorSet, [u8; 32])>,
/* TODO
let cosigns = Cosigning::<D>::notable_cosigns(&self.db, global_session);
let res = reqres::Response::NotableCosigns(cosigns);
let _: Result<_, _> = self.swarm.behaviour_mut().reqres.send_response(channel, res);
*/
notable_cosign_requests: mpsc::UnboundedSender<(RequestId, [u8; 32])>,
inbound_request_responses: mpsc::UnboundedReceiver<(RequestId, Response)>,
inbound_request_response_channels: HashMap<InboundRequestId, ResponseChannel<Response>>,
heartbeat_requests: mpsc::UnboundedSender<(InboundRequestId, ExternalValidatorSet, [u8; 32])>,
notable_cosign_requests: mpsc::UnboundedSender<(InboundRequestId, [u8; 32])>,
inbound_request_responses: mpsc::UnboundedReceiver<(InboundRequestId, Response)>,
}
impl SwarmTask {
@@ -97,7 +92,8 @@ impl SwarmTask {
}
}
gossip::Event::Subscribed { .. } | gossip::Event::Unsubscribed { .. } => {}
gossip::Event::GossipsubNotSupported { peer_id } => {
gossip::Event::GossipsubNotSupported { peer_id } |
gossip::Event::SlowPeer { peer_id, .. } => {
let _: Result<_, _> = self.swarm.disconnect_peer_id(peer_id);
}
}
@@ -227,25 +223,21 @@ impl SwarmTask {
}
}
SwarmEvent::Behaviour(
BehaviorEvent::AllowList(event) | BehaviorEvent::ConnectionLimits(event)
) => {
// Ensure these are unreachable cases, not actual events
let _: void::Void = event;
}
SwarmEvent::Behaviour(
BehaviorEvent::Ping(ping::Event { peer: _, connection, result, })
) => {
if result.is_err() {
self.swarm.close_connection(connection);
SwarmEvent::Behaviour(event) => {
match event {
BehaviorEvent::AllowList(event) | BehaviorEvent::ConnectionLimits(event) => {
// This *is* an exhaustive match as these events are empty enums
match event {}
}
BehaviorEvent::Ping(ping::Event { peer: _, connection, result, }) => {
if result.is_err() {
self.swarm.close_connection(connection);
}
}
BehaviorEvent::Reqres(event) => self.handle_reqres(event),
BehaviorEvent::Gossip(event) => self.handle_gossip(event),
}
}
SwarmEvent::Behaviour(BehaviorEvent::Reqres(event)) => {
self.handle_reqres(event)
}
SwarmEvent::Behaviour(BehaviorEvent::Gossip(event)) => {
self.handle_gossip(event)
}
// We don't handle any of these
SwarmEvent::IncomingConnection { .. } |
@@ -255,7 +247,14 @@ impl SwarmTask {
SwarmEvent::ExpiredListenAddr { .. } |
SwarmEvent::ListenerClosed { .. } |
SwarmEvent::ListenerError { .. } |
SwarmEvent::Dialing { .. } => {}
SwarmEvent::Dialing { .. } |
SwarmEvent::NewExternalAddrCandidate { .. } |
SwarmEvent::ExternalAddrConfirmed { .. } |
SwarmEvent::ExternalAddrExpired { .. } |
SwarmEvent::NewExternalAddrOfPeer { .. } => {}
// Required as SwarmEvent is non-exhaustive
_ => log::warn!("unhandled SwarmEvent: {event:?}"),
}
}
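The `match event {}` arm above relies on a Rust property worth spelling out: a match over a value of an uninhabited type needs no arms and is accepted as exhaustive. A standalone sketch of the idiom:

```rust
// An enum with no variants can never be constructed.
#[allow(dead_code)]
enum Never {}

// Exhaustive: there are zero variants, so zero arms cover them all. This is
// how the swarm task proves AllowList/ConnectionLimits emit no events.
#[allow(dead_code)]
fn absurd(event: Never) -> ! {
  match event {}
}

fn main() {}
```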
@@ -326,9 +325,9 @@ impl SwarmTask {
outbound_requests: mpsc::UnboundedReceiver<(PeerId, Request, oneshot::Sender<Response>)>,
heartbeat_requests: mpsc::UnboundedSender<(RequestId, ValidatorSet, [u8; 32])>,
notable_cosign_requests: mpsc::UnboundedSender<(RequestId, [u8; 32])>,
inbound_request_responses: mpsc::UnboundedReceiver<(RequestId, Response)>,
heartbeat_requests: mpsc::UnboundedSender<(InboundRequestId, ExternalValidatorSet, [u8; 32])>,
notable_cosign_requests: mpsc::UnboundedSender<(InboundRequestId, [u8; 32])>,
inbound_request_responses: mpsc::UnboundedReceiver<(InboundRequestId, Response)>,
) {
tokio::spawn(
SwarmTask {

View File

@@ -4,7 +4,8 @@ use std::{
collections::{HashSet, HashMap},
};
use serai_client::{primitives::NetworkId, validator_sets::primitives::Session, Serai};
use serai_client_serai::abi::primitives::{network_id::ExternalNetworkId, validator_sets::Session};
use serai_client_serai::{RpcError, Serai};
use serai_task::{Task, ContinuallyRan};
@@ -21,21 +22,21 @@ pub(crate) struct Changes {
}
pub(crate) struct Validators {
serai: Serai,
serai: Arc<Serai>,
// A cache for which session we're populated with the validators of
sessions: HashMap<NetworkId, Session>,
sessions: HashMap<ExternalNetworkId, Session>,
// The validators by network
by_network: HashMap<NetworkId, HashSet<PeerId>>,
by_network: HashMap<ExternalNetworkId, HashSet<PeerId>>,
// The validators and their networks
validators: HashMap<PeerId, HashSet<NetworkId>>,
validators: HashMap<PeerId, HashSet<ExternalNetworkId>>,
// The channel to send the changes down
changes: mpsc::UnboundedSender<Changes>,
}
impl Validators {
pub(crate) fn new(serai: Serai) -> (Self, mpsc::UnboundedReceiver<Changes>) {
pub(crate) fn new(serai: Arc<Serai>) -> (Self, mpsc::UnboundedReceiver<Changes>) {
let (send, recv) = mpsc::unbounded_channel();
let validators = Validators {
serai,
@@ -49,39 +50,47 @@ impl Validators {
async fn session_changes(
serai: impl Borrow<Serai>,
sessions: impl Borrow<HashMap<NetworkId, Session>>,
) -> Result<Vec<(NetworkId, Session, HashSet<PeerId>)>, String> {
let temporal_serai =
serai.borrow().as_of_latest_finalized_block().await.map_err(|e| format!("{e:?}"))?;
let temporal_serai = temporal_serai.validator_sets();
sessions: impl Borrow<HashMap<ExternalNetworkId, Session>>,
) -> Result<Vec<(ExternalNetworkId, Session, HashSet<PeerId>)>, RpcError> {
/*
This uses the latest finalized block, not the latest cosigned block, which should be fine as
in the worst case, we'd connect to unexpected validators. They still shouldn't be able to
bypass the cosign protocol unless a historical global session was malicious, in which case
the cosign protocol already breaks.
Besides, we can't connect to historical validators, only the current validators.
*/
let serai = serai.borrow().state().await?;
let mut session_changes = vec![];
{
// FuturesUnordered can be bad practice as it'll cause timeouts if infrequently polled, but
// we poll it till it yields all futures with the most minimal processing possible
let mut futures = FuturesUnordered::new();
for network in serai_client::primitives::NETWORKS {
if network == NetworkId::Serai {
continue;
}
for network in ExternalNetworkId::all() {
let sessions = sessions.borrow();
let serai = serai.borrow();
futures.push(async move {
let session = match temporal_serai.session(network).await {
let session = match serai.current_session(network.into()).await {
Ok(Some(session)) => session,
Ok(None) => return Ok(None),
Err(e) => return Err(format!("{e:?}")),
Err(e) => return Err(e),
};
if sessions.get(&network) == Some(&session) {
Ok(None)
} else {
match temporal_serai.active_network_validators(network).await {
Ok(validators) => Ok(Some((
match serai.current_validators(network.into()).await {
Ok(Some(validators)) => Ok(Some((
network,
session,
validators.into_iter().map(peer_id_from_public).collect(),
validators
.into_iter()
.map(|validator| peer_id_from_public(validator.into()))
.collect(),
))),
Err(e) => Err(format!("{e:?}")),
Ok(None) => panic!("network has session yet no validators"),
Err(e) => Err(e),
}
}
});
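The `FuturesUnordered` caveat noted above (timeouts when infrequently polled) is avoided here by pushing all per-network futures and then draining the stream to completion in a tight loop. A self-contained sketch of that pattern, with toy futures standing in for the RPC calls:

```rust
use futures_util::{stream::FuturesUnordered, StreamExt};

#[tokio::main]
async fn main() {
  let mut futures = FuturesUnordered::new();
  for network in 0u8 .. 3 {
    // Stand-in for the per-network session/validators queries.
    futures.push(async move { Ok::<_, String>((network, u32::from(network) * 10)) });
  }

  // Drain immediately and completely: the stream is polled until it yields
  // every future, with only trivial processing in between, so no future sits
  // unpolled long enough to time out.
  let mut results = vec![];
  while let Some(result) = futures.next().await {
    results.push(result.unwrap());
  }
  assert_eq!(results.len(), 3);
}
```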
@@ -98,7 +107,7 @@ impl Validators {
fn incorporate_session_changes(
&mut self,
session_changes: Vec<(NetworkId, Session, HashSet<PeerId>)>,
session_changes: Vec<(ExternalNetworkId, Session, HashSet<PeerId>)>,
) {
let mut removed = HashSet::new();
let mut added = HashSet::new();
@@ -147,17 +156,17 @@ impl Validators {
}
/// Update the view of the validators.
pub(crate) async fn update(&mut self) -> Result<(), String> {
let session_changes = Self::session_changes(&self.serai, &self.sessions).await?;
pub(crate) async fn update(&mut self) -> Result<(), RpcError> {
let session_changes = Self::session_changes(&*self.serai, &self.sessions).await?;
self.incorporate_session_changes(session_changes);
Ok(())
}
pub(crate) fn by_network(&self) -> &HashMap<NetworkId, HashSet<PeerId>> {
pub(crate) fn by_network(&self) -> &HashMap<ExternalNetworkId, HashSet<PeerId>> {
&self.by_network
}
pub(crate) fn networks(&self, peer_id: &PeerId) -> Option<&HashSet<NetworkId>> {
pub(crate) fn networks(&self, peer_id: &PeerId) -> Option<&HashSet<ExternalNetworkId>> {
self.validators.get(peer_id)
}
}
@@ -174,7 +183,9 @@ impl UpdateValidatorsTask {
/// Spawn a new instance of the UpdateValidatorsTask.
///
/// This returns a reference to the Validators it updates after spawning itself.
pub(crate) fn spawn(serai: Serai) -> (Arc<RwLock<Validators>>, mpsc::UnboundedReceiver<Changes>) {
pub(crate) fn spawn(
serai: Arc<Serai>,
) -> (Arc<RwLock<Validators>>, mpsc::UnboundedReceiver<Changes>) {
// The validators which will be updated
let (validators, changes) = Validators::new(serai);
let validators = Arc::new(RwLock::new(validators));
@@ -198,13 +209,13 @@ impl ContinuallyRan for UpdateValidatorsTask {
const DELAY_BETWEEN_ITERATIONS: u64 = 60;
const MAX_DELAY_BETWEEN_ITERATIONS: u64 = 5 * 60;
fn run_iteration(&mut self) -> impl Send + Future<Output = Result<bool, String>> {
type Error = RpcError;
fn run_iteration(&mut self) -> impl Send + Future<Output = Result<bool, Self::Error>> {
async move {
let session_changes = {
let validators = self.validators.read().await;
Validators::session_changes(validators.serai.clone(), validators.sessions.clone())
.await
.map_err(|e| format!("{e:?}"))?
Validators::session_changes(validators.serai.clone(), validators.sessions.clone()).await?
};
self.validators.write().await.incorporate_session_changes(session_changes);
Ok(true)

View File

@@ -1,11 +1,11 @@
use core::future::Future;
use std::time::{Duration, SystemTime};
use serai_client::validator_sets::primitives::ValidatorSet;
use serai_primitives::validator_sets::{ExternalValidatorSet, KeyShares};
use futures_lite::FutureExt;
use tributary::{ReadWrite, TransactionTrait, Block, Tributary, TributaryReader};
use tributary_sdk::{ReadWrite, TransactionTrait, Block, Tributary, TributaryReader};
use serai_db::*;
use serai_task::ContinuallyRan;
@@ -13,25 +13,41 @@ use serai_task::ContinuallyRan;
use crate::{Heartbeat, Peer, P2p};
// Amount of blocks in a minute
const BLOCKS_PER_MINUTE: usize = (60 / (tributary::tendermint::TARGET_BLOCK_TIME / 1000)) as usize;
const BLOCKS_PER_MINUTE: usize =
(60 / (tributary_sdk::tendermint::TARGET_BLOCK_TIME / 1000)) as usize;
/// The maximum amount of blocks to include/included within a batch.
pub const BLOCKS_PER_BATCH: usize = BLOCKS_PER_MINUTE + 1;
/// The minimum amount of blocks to include/included within a batch, assuming there are blocks to
/// include in the batch.
///
/// This decides the size limit of the Batch (the Block size limit multiplied by the minimum amount
/// of blocks we'll send). The actual amount of blocks sent will be the amount which fits within
/// the size limit.
pub const MIN_BLOCKS_PER_BATCH: usize = BLOCKS_PER_MINUTE + 1;
/// The size limit for a batch of blocks sent in response to a Heartbeat.
///
/// This estimates the size of a commit as `32 + (MAX_VALIDATORS * 128)`. At the time of writing, a
/// commit is `8 + (validators * 32) + (32 + (validators * 32))` (for the time, list of validators,
/// and aggregate signature). Accordingly, this should be a safe over-estimate.
pub const BATCH_SIZE_LIMIT: usize = MIN_BLOCKS_PER_BATCH *
(tributary_sdk::BLOCK_SIZE_LIMIT + 32 + ((KeyShares::MAX_PER_SET as usize) * 128));
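The doc comment's claim that `32 + (MAX_VALIDATORS * 128)` safely over-estimates a commit of `8 + (validators * 32) + (32 + (validators * 32))` reduces to `32 + 128v >= 40 + 64v`, i.e. `64v >= 8`, which holds for any validator count of at least one. A quick check of that inequality:

```rust
fn main() {
  // Actual commit size at the time of writing: time, list of validators,
  // and aggregate signature.
  let actual = |v: usize| 8 + (v * 32) + (32 + (v * 32));
  // The estimate used to derive BATCH_SIZE_LIMIT.
  let estimate = |v: usize| 32 + (v * 128);

  // The estimate dominates for every validator count of at least one.
  for v in 1 ..= 1_000 {
    assert!(estimate(v) >= actual(v));
  }
}
```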
/// Sends a heartbeat to other validators on regular intervals informing them of our Tributary's
/// tip.
///
/// If the other validator has more blocks than we do, they're expected to inform us. This forms
/// the sync protocol for our Tributaries.
pub struct HeartbeatTask<TD: Db, Tx: TransactionTrait, P: P2p> {
set: ValidatorSet,
tributary: Tributary<TD, Tx, P>,
reader: TributaryReader<TD, Tx>,
p2p: P,
pub(crate) struct HeartbeatTask<TD: Db, Tx: TransactionTrait, P: P2p> {
pub(crate) set: ExternalValidatorSet,
pub(crate) tributary: Tributary<TD, Tx, P>,
pub(crate) reader: TributaryReader<TD, Tx>,
pub(crate) p2p: P,
}
impl<TD: Db, Tx: TransactionTrait, P: P2p> ContinuallyRan for HeartbeatTask<TD, Tx, P> {
fn run_iteration(&mut self) -> impl Send + Future<Output = Result<bool, String>> {
type Error = String;
fn run_iteration(&mut self) -> impl Send + Future<Output = Result<bool, Self::Error>> {
async move {
// If our blockchain hasn't had a block in the past minute, trigger the heartbeat protocol
const TIME_TO_TRIGGER_SYNCING: Duration = Duration::from_secs(60);
@@ -80,7 +96,7 @@ impl<TD: Db, Tx: TransactionTrait, P: P2p> ContinuallyRan for HeartbeatTask<TD,
// This is the final batch if it has less than the maximum amount of blocks
// (signifying there weren't more blocks after this to fill the batch with)
let final_batch = blocks.len() < BLOCKS_PER_BATCH;
let final_batch = blocks.len() < MIN_BLOCKS_PER_BATCH;
// Sync each block
for block_with_commit in blocks {

View File

@@ -1,26 +1,31 @@
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
#![cfg_attr(docsrs, feature(doc_cfg))]
#![doc = include_str!("../README.md")]
#![deny(missing_docs)]
use core::future::Future;
use std::collections::HashMap;
use borsh::{BorshSerialize, BorshDeserialize};
use serai_client::{primitives::NetworkId, validator_sets::primitives::ValidatorSet};
use serai_primitives::{network_id::ExternalNetworkId, validator_sets::ExternalValidatorSet};
use serai_cosign::SignedCosign;
use serai_db::Db;
use tributary_sdk::{ReadWrite, TransactionTrait, Tributary, TributaryReader};
use serai_cosign::{SignedCosign, Cosigning};
/// A oneshot channel.
pub mod oneshot;
use tokio::sync::{mpsc, oneshot};
use serai_task::{Task, ContinuallyRan};
/// The heartbeat task, effecting sync of Tributaries
pub mod heartbeat;
use crate::heartbeat::HeartbeatTask;
/// A heartbeat for a Tributary.
#[derive(Clone, Copy, BorshSerialize, BorshDeserialize, Debug)]
pub struct Heartbeat {
/// The Tributary this is the heartbeat of.
pub set: ValidatorSet,
pub set: ExternalValidatorSet,
/// The hash of the latest block added to the Tributary.
pub latest_block_hash: [u8; 32],
}
@@ -44,17 +49,23 @@ pub trait Peer<'a>: Send {
}
/// The representation of the P2P network.
pub trait P2p: Send + Sync + Clone + tributary::P2p + serai_cosign::RequestNotableCosigns {
pub trait P2p:
Send + Sync + Clone + tributary_sdk::P2p + serai_cosign::RequestNotableCosigns
{
/// The representation of a peer.
type Peer<'a>: Peer<'a>;
/// Fetch the peers for this network.
fn peers(&self, network: NetworkId) -> impl Send + Future<Output = Vec<Self::Peer<'_>>>;
fn peers(&self, network: ExternalNetworkId) -> impl Send + Future<Output = Vec<Self::Peer<'_>>>;
/// Broadcast a cosign.
fn publish_cosign(&self, cosign: SignedCosign) -> impl Send + Future<Output = ()>;
/// A cancel-safe future for the next heartbeat received over the P2P network.
///
/// Yields the validator set it's for, the latest block hash observed, and a channel to return the
/// descending blocks.
/// descending blocks. This channel MUST NOT and will not have its receiver dropped before a
/// message is sent.
fn heartbeat(
&self,
) -> impl Send + Future<Output = (Heartbeat, oneshot::Sender<Vec<TributaryBlockWithCommit>>)>;
@@ -62,6 +73,7 @@ pub trait P2p: Send + Sync + Clone + tributary::P2p + serai_cosign::RequestNotab
/// A cancel-safe future for the next request for the notable cosigns of a global session.
///
/// Yields the global session the request is for and a channel to return the notable cosigns.
/// This channel MUST NOT and will not have its receiver dropped before a message is sent.
fn notable_cosigns_request(
&self,
) -> impl Send + Future<Output = ([u8; 32], oneshot::Sender<Vec<SignedCosign>>)>;
@@ -74,3 +86,119 @@ pub trait P2p: Send + Sync + Clone + tributary::P2p + serai_cosign::RequestNotab
/// A cancel-safe future for the next cosign received.
fn cosign(&self) -> impl Send + Future<Output = SignedCosign>;
}
fn handle_notable_cosigns_request<D: Db>(
db: &D,
global_session: [u8; 32],
channel: oneshot::Sender<Vec<SignedCosign>>,
) {
let cosigns = Cosigning::<D>::notable_cosigns(db, global_session);
channel.send(cosigns).expect("channel listening for cosign oneshot response was dropped?");
}
fn handle_heartbeat<D: Db, T: TransactionTrait>(
reader: &TributaryReader<D, T>,
mut latest_block_hash: [u8; 32],
channel: oneshot::Sender<Vec<TributaryBlockWithCommit>>,
) {
let mut res_size = 8;
let mut res = vec![];
// The former condition should be implied by the latter, as the size limit covers the minimum
// amount of blocks
while (res.len() < heartbeat::MIN_BLOCKS_PER_BATCH) || (res_size < heartbeat::BATCH_SIZE_LIMIT) {
let Some(block_after) = reader.block_after(&latest_block_hash) else { break };
// These `break` conditions should only occur under edge cases, such as if we're actively
// deleting this Tributary due to being done with it
let Some(block) = reader.block(&block_after) else { break };
let block = block.serialize();
let Some(commit) = reader.commit(&block_after) else { break };
res_size += 8 + block.len() + 8 + commit.len();
res.push(TributaryBlockWithCommit { block, commit });
latest_block_hash = block_after;
}
channel
.send(res)
.map_err(|_| ())
.expect("channel listening for heartbeat oneshot response was dropped?");
}
/// Run the P2P instance.
///
/// `add_tributary`'s and `retire_tributary`'s senders, along with `send_cosigns`'s receiver, must
/// never be dropped. `retire_tributary` may be instructed with Tributaries which were never
/// added.
pub async fn run<TD: Db, Tx: TransactionTrait, P: P2p>(
db: impl Db,
p2p: P,
mut add_tributary: mpsc::UnboundedReceiver<(ExternalValidatorSet, Tributary<TD, Tx, P>)>,
mut retire_tributary: mpsc::UnboundedReceiver<ExternalValidatorSet>,
send_cosigns: mpsc::UnboundedSender<SignedCosign>,
) {
let mut readers = HashMap::<ExternalValidatorSet, TributaryReader<TD, Tx>>::new();
let mut tributaries = HashMap::<[u8; 32], mpsc::UnboundedSender<Vec<u8>>>::new();
let mut heartbeat_tasks = HashMap::<ExternalValidatorSet, _>::new();
loop {
tokio::select! {
tributary = add_tributary.recv() => {
let (set, tributary) = tributary.expect("add_tributary send was dropped");
let reader = tributary.reader();
readers.insert(set, reader.clone());
let (heartbeat_task_def, heartbeat_task) = Task::new();
tokio::spawn(
(HeartbeatTask {
set,
tributary: tributary.clone(),
reader: reader.clone(),
p2p: p2p.clone(),
}).continually_run(heartbeat_task_def, vec![])
);
heartbeat_tasks.insert(set, heartbeat_task);
let (tributary_message_send, mut tributary_message_recv) = mpsc::unbounded_channel();
tributaries.insert(tributary.genesis(), tributary_message_send);
// For as long as this sender exists, handle the messages from it on a dedicated task
tokio::spawn(async move {
while let Some(message) = tributary_message_recv.recv().await {
tributary.handle_message(&message).await;
}
});
}
set = retire_tributary.recv() => {
let set = set.expect("retire_tributary send was dropped");
let Some(reader) = readers.remove(&set) else { continue };
tributaries.remove(&reader.genesis()).expect("tributary reader but no tributary");
heartbeat_tasks.remove(&set).expect("tributary but no heartbeat task");
}
(heartbeat, channel) = p2p.heartbeat() => {
if let Some(reader) = readers.get(&heartbeat.set) {
let reader = reader.clone(); // This is a cheap clone
// We spawn this on a task due to the DB reads needed
tokio::spawn(async move {
handle_heartbeat(&reader, heartbeat.latest_block_hash, channel)
});
}
}
(global_session, channel) = p2p.notable_cosigns_request() => {
tokio::spawn({
let db = db.clone();
async move { handle_notable_cosigns_request(&db, global_session, channel) }
});
}
(tributary, message) = p2p.tributary_message() => {
if let Some(tributary) = tributaries.get(&tributary) {
tributary.send(message).expect("tributary message recv was dropped?");
}
}
cosign = p2p.cosign() => {
// We don't call `Cosigning::intake_cosign` here as that can only be called from a single
// location. We also need to intake the cosigns we produce, which means we need to merge
// these streams (signing, network) somehow. That's done with this mpsc channel
send_cosigns.send(cosign).expect("channel receiving cosigns was dropped");
}
}
}
}

View File

@@ -1,35 +0,0 @@
use core::{
pin::Pin,
task::{Poll, Context},
future::Future,
};
pub use async_channel::{SendError, RecvError};
/// The sender for a oneshot channel.
pub struct Sender<T: Send>(async_channel::Sender<T>);
impl<T: Send> Sender<T> {
/// Send a value down the channel.
///
/// Returns an error if the channel's receiver was dropped.
pub fn send(self, msg: T) -> Result<(), SendError<T>> {
self.0.send_blocking(msg)
}
}
/// The receiver for a oneshot channel.
pub struct Receiver<T: Send>(async_channel::Receiver<T>);
impl<T: Send> Future for Receiver<T> {
type Output = Result<T, RecvError>;
fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
let recv = self.0.recv();
futures_lite::pin!(recv);
recv.poll(cx)
}
}
/// Create a new oneshot channel.
pub fn channel<T: Send>() -> (Sender<T>, Receiver<T>) {
let (send, recv) = async_channel::bounded(1);
(Sender(send), Receiver(recv))
}

coordinator/src/db.rs Normal file
View File

@@ -0,0 +1,150 @@
use std::{path::Path, fs};
pub(crate) use serai_db::{Get, DbTxn, Db as DbTrait};
use serai_db::{create_db, db_channel};
use dkg::Participant;
use serai_client_serai::abi::primitives::{
crypto::KeyPair,
network_id::ExternalNetworkId,
validator_sets::{Session, ExternalValidatorSet},
};
use serai_cosign::SignedCosign;
use serai_coordinator_substrate::NewSetInformation;
use serai_coordinator_tributary::Transaction;
#[cfg(all(feature = "parity-db", not(feature = "rocksdb")))]
pub(crate) type Db = std::sync::Arc<serai_db::ParityDb>;
#[cfg(feature = "rocksdb")]
pub(crate) type Db = serai_db::RocksDB;
#[allow(unused_variables, unreachable_code)]
fn db(path: &str) -> Db {
{
let path: &Path = path.as_ref();
// This may error if this path already exists, which we shouldn't propagate/panic on. If this
// is a problem (such as if we don't have the necessary permissions to write to this path), we
// expect the following DB opening to error.
let _: Result<_, _> = fs::create_dir_all(path.parent().unwrap());
}
#[cfg(all(feature = "parity-db", feature = "rocksdb"))]
panic!("built with parity-db and rocksdb");
#[cfg(all(feature = "parity-db", not(feature = "rocksdb")))]
let db = serai_db::new_parity_db(path);
#[cfg(feature = "rocksdb")]
let db = serai_db::new_rocksdb(path);
db
}
pub(crate) fn coordinator_db() -> Db {
let root_path = serai_env::var("DB_PATH").expect("path to DB wasn't specified");
db(&format!("{root_path}/coordinator/db"))
}
fn tributary_db_folder(set: ExternalValidatorSet) -> String {
let root_path = serai_env::var("DB_PATH").expect("path to DB wasn't specified");
let network = match set.network {
ExternalNetworkId::Bitcoin => "Bitcoin",
ExternalNetworkId::Ethereum => "Ethereum",
ExternalNetworkId::Monero => "Monero",
};
format!("{root_path}/tributary-{network}-{}", set.session.0)
}
pub(crate) fn tributary_db(set: ExternalValidatorSet) -> Db {
db(&format!("{}/db", tributary_db_folder(set)))
}
pub(crate) fn prune_tributary_db(set: ExternalValidatorSet) {
log::info!("pruning data directory for tributary {set:?}");
let db = tributary_db_folder(set);
if fs::exists(&db).expect("couldn't check if tributary DB exists") {
fs::remove_dir_all(db).unwrap();
}
}
create_db! {
Coordinator {
// The currently active Tributaries
ActiveTributaries: () -> Vec<NewSetInformation>,
// The latest Tributary to have been retired for a network
// Since Tributaries are retired sequentially, this is informative as to whether any Tributary
// has been retired
RetiredTributary: (network: ExternalNetworkId) -> Session,
// The last handled message from a Processor
LastProcessorMessage: (network: ExternalNetworkId) -> u64,
// Cosigns we produced and tried to intake yet incurred an error while doing so
ErroneousCosigns: () -> Vec<SignedCosign>,
// The keys to confirm and set on the Serai network
KeysToConfirm: (set: ExternalValidatorSet) -> KeyPair,
// The key was set on the Serai network
KeySet: (set: ExternalValidatorSet) -> (),
}
}
db_channel! {
Coordinator {
// Cosigns we produced
SignedCosigns: () -> SignedCosign,
// Tributaries to clean up upon reboot
TributaryCleanup: () -> ExternalValidatorSet,
}
}
mod _internal_db {
use super::*;
db_channel! {
Coordinator {
// Tributary transactions to publish from the Processor messages
TributaryTransactionsFromProcessorMessages: (set: ExternalValidatorSet) -> Transaction,
// Tributary transactions to publish from the DKG confirmation task
TributaryTransactionsFromDkgConfirmation: (set: ExternalValidatorSet) -> Transaction,
// Participants to remove
RemoveParticipant: (set: ExternalValidatorSet) -> u16,
}
}
}
pub(crate) struct TributaryTransactionsFromProcessorMessages;
impl TributaryTransactionsFromProcessorMessages {
pub(crate) fn send(txn: &mut impl DbTxn, set: ExternalValidatorSet, tx: &Transaction) {
// If this set has yet to be retired, send this transaction
if RetiredTributary::get(txn, set.network).map(|session| session.0) < Some(set.session.0) {
_internal_db::TributaryTransactionsFromProcessorMessages::send(txn, set, tx);
}
}
pub(crate) fn try_recv(txn: &mut impl DbTxn, set: ExternalValidatorSet) -> Option<Transaction> {
_internal_db::TributaryTransactionsFromProcessorMessages::try_recv(txn, set)
}
}
pub(crate) struct TributaryTransactionsFromDkgConfirmation;
impl TributaryTransactionsFromDkgConfirmation {
pub(crate) fn send(txn: &mut impl DbTxn, set: ExternalValidatorSet, tx: &Transaction) {
// If this set has yet to be retired, send this transaction
if RetiredTributary::get(txn, set.network).map(|session| session.0) < Some(set.session.0) {
_internal_db::TributaryTransactionsFromDkgConfirmation::send(txn, set, tx);
}
}
pub(crate) fn try_recv(txn: &mut impl DbTxn, set: ExternalValidatorSet) -> Option<Transaction> {
_internal_db::TributaryTransactionsFromDkgConfirmation::try_recv(txn, set)
}
}
pub(crate) struct RemoveParticipant;
impl RemoveParticipant {
pub(crate) fn send(txn: &mut impl DbTxn, set: ExternalValidatorSet, participant: Participant) {
// If this set has yet to be retired, send this transaction
if RetiredTributary::get(txn, set.network).map(|session| session.0) < Some(set.session.0) {
_internal_db::RemoveParticipant::send(txn, set, &u16::from(participant));
}
}
pub(crate) fn try_recv(txn: &mut impl DbTxn, set: ExternalValidatorSet) -> Option<Participant> {
_internal_db::RemoveParticipant::try_recv(txn, set)
.map(|i| Participant::new(i).expect("sent invalid participant index for removal"))
}
}
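The retirement gate shared by all three senders, `RetiredTributary::get(txn, set.network).map(|session| session.0) < Some(set.session.0)`, leans on `Option`'s derived ordering: `None` compares less than any `Some`, so a network with no retired session yet passes the check for every session. A small sketch of just that comparison:

```rust
fn main() {
  // No Tributary retired yet for this network: every session may send.
  let retired: Option<u32> = None;
  assert!(retired < Some(0));
  assert!(retired < Some(5));

  // Session 3 retired: only strictly newer sessions may send.
  let retired = Some(3u32);
  assert!(!(retired < Some(3))); // The retired session itself is blocked
  assert!(retired < Some(4)); // Its successor is not
}
```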

View File

@@ -0,0 +1,441 @@
use core::{ops::Deref, future::Future};
use std::{boxed::Box, collections::HashMap};
use zeroize::Zeroizing;
use rand_core::OsRng;
use ciphersuite::{group::GroupEncoding, *};
use dkg::{Participant, musig};
use frost_schnorrkel::{
frost::{curve::Ristretto, FrostError, sign::*},
Schnorrkel,
};
use serai_db::{DbTxn, Db as DbTrait};
#[rustfmt::skip]
use serai_client_serai::abi::primitives::{validator_sets::ExternalValidatorSet, address::SeraiAddress};
use serai_task::{DoesNotError, ContinuallyRan};
use serai_coordinator_substrate::{NewSetInformation, Keys};
use serai_coordinator_tributary::{Transaction, DkgConfirmationMessages};
use crate::{KeysToConfirm, KeySet, TributaryTransactionsFromDkgConfirmation};
fn schnorrkel() -> Schnorrkel {
Schnorrkel::new(b"substrate") // TODO: Pull the constant for this
}
fn our_i(
set: &NewSetInformation,
key: &Zeroizing<<Ristretto as WrappedGroup>::F>,
data: &HashMap<Participant, Vec<u8>>,
) -> Participant {
let public = SeraiAddress((Ristretto::generator() * key.deref()).to_bytes());
let mut our_i = None;
for participant in data.keys() {
let validator_index = usize::from(u16::from(*participant) - 1);
let (validator, _weight) = set.validators[validator_index];
if validator == public {
our_i = Some(*participant);
}
}
our_i.unwrap()
}
// Take a HashMap of participations with non-contiguous Participants and convert them to a
// contiguous sequence.
//
// The input data is expected to not include our own data, which also won't be in the output data.
//
// Returns the input data re-keyed by contiguous Participants. On error, yields the original
// Participant whose data failed to transform.
fn make_contiguous<T>(
our_i: Participant,
mut data: HashMap<Participant, Vec<u8>>,
transform: impl Fn(Vec<u8>) -> std::io::Result<T>,
) -> Result<HashMap<Participant, T>, Participant> {
assert!(!data.contains_key(&our_i));
let mut ordered_participants = data.keys().copied().collect::<Vec<_>>();
ordered_participants.sort_by_key(|participant| u16::from(*participant));
let mut our_i = Some(our_i);
let mut contiguous = HashMap::new();
let mut i = 1;
for participant in ordered_participants {
// If this is the first participant after our own index, increment to account for our index
if let Some(our_i_value) = our_i {
if u16::from(participant) > u16::from(our_i_value) {
i += 1;
our_i = None;
}
}
let contiguous_index = Participant::new(i).unwrap();
let data = match transform(data.remove(&participant).unwrap()) {
Ok(data) => data,
Err(_) => Err(participant)?,
};
contiguous.insert(contiguous_index, data);
i += 1;
}
Ok(contiguous)
}
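A worked example of `make_contiguous`'s re-indexing may help: with our own index absent from the input, the remaining participants are renumbered in order, skipping over the slot our index would occupy. The sketch below reimplements just the renumbering standalone (plain `u16`s instead of `dkg::Participant`) to show the expected mapping:

```rust
use std::collections::HashMap;

fn main() {
  // Original (1-indexed) participants 1, 3, and 5 sent data; we are 2.
  let our_i = 2u16;
  let mut data = HashMap::from([(1u16, "a"), (3, "b"), (5, "c")]);

  let mut ordered = data.keys().copied().collect::<Vec<_>>();
  ordered.sort_unstable();

  let mut contiguous = HashMap::new();
  let mut our_slot = Some(our_i);
  let mut i = 1u16;
  for participant in ordered {
    // Skip the slot our own index occupies in the contiguous numbering
    if let Some(our_i) = our_slot {
      if participant > our_i {
        i += 1;
        our_slot = None;
      }
    }
    contiguous.insert(i, data.remove(&participant).unwrap());
    i += 1;
  }

  // 1 stays 1; we implicitly occupy 2; 3 becomes 3 and 5 becomes 4.
  assert_eq!(contiguous, HashMap::from([(1, "a"), (3, "b"), (4, "c")]));
}
```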
fn handle_frost_error<T>(result: Result<T, FrostError>) -> Result<T, Participant> {
match &result {
Ok(_) => Ok(result.unwrap()),
Err(FrostError::InvalidPreprocess(participant) | FrostError::InvalidShare(participant)) => {
Err(*participant)
}
// All of these should be unreachable
Err(
FrostError::InternalError(_) |
FrostError::InvalidParticipant(_, _) |
FrostError::InvalidSigningSet(_) |
FrostError::InvalidParticipantQuantity(_, _) |
FrostError::DuplicatedParticipant(_) |
FrostError::MissingParticipant(_),
) => {
result.unwrap();
unreachable!("continued execution after unwrapping Result::Err");
}
}
}
#[rustfmt::skip]
enum Signer {
Preprocess { attempt: u32, seed: CachedPreprocess, preprocess: [u8; 64] },
Share {
attempt: u32,
musig_validators: Vec<SeraiAddress>,
share: [u8; 32],
machine: Box<AlgorithmSignatureMachine<Ristretto, Schnorrkel>>,
},
}
/// Performs the DKG Confirmation protocol.
pub(crate) struct ConfirmDkgTask<CD: DbTrait, TD: DbTrait> {
db: CD,
set: NewSetInformation,
tributary_db: TD,
key: Zeroizing<<Ristretto as WrappedGroup>::F>,
signer: Option<Signer>,
}
impl<CD: DbTrait, TD: DbTrait> ConfirmDkgTask<CD, TD> {
pub(crate) fn new(
db: CD,
set: NewSetInformation,
tributary_db: TD,
key: Zeroizing<<Ristretto as WrappedGroup>::F>,
) -> Self {
Self { db, set, tributary_db, key, signer: None }
}
fn slash(db: &mut CD, set: ExternalValidatorSet, validator: SeraiAddress) {
let mut txn = db.txn();
TributaryTransactionsFromDkgConfirmation::send(
&mut txn,
set,
&Transaction::RemoveParticipant { participant: validator, signed: Default::default() },
);
txn.commit();
}
fn preprocess(
db: &mut CD,
set: ExternalValidatorSet,
attempt: u32,
key: Zeroizing<<Ristretto as WrappedGroup>::F>,
signer: &mut Option<Signer>,
) {
// Perform the preprocess
let public_key = Ristretto::generator() * key.deref();
let (machine, preprocess) = AlgorithmMachine::new(
schnorrkel(),
// We use a 1-of-1 Musig here as we don't know who will actually be in this Musig yet
musig(ExternalValidatorSet::musig_context(&set), key, &[public_key]).unwrap(),
)
.preprocess(&mut OsRng);
// We take the preprocess so we can use it in a distinct machine with the actual Musig
// parameters
let seed = machine.cache();
let mut preprocess_bytes = [0u8; 64];
preprocess_bytes.copy_from_slice(&preprocess.serialize());
let preprocess = preprocess_bytes;
let mut txn = db.txn();
// If this attempt has already been preprocessed for, the Tributary will de-duplicate it
// This may mean the Tributary preprocess is distinct from ours, but we check for that later
TributaryTransactionsFromDkgConfirmation::send(
&mut txn,
set,
&Transaction::DkgConfirmationPreprocess { attempt, preprocess, signed: Default::default() },
);
txn.commit();
*signer = Some(Signer::Preprocess { attempt, seed, preprocess });
}
}
impl<CD: DbTrait, TD: DbTrait> ContinuallyRan for ConfirmDkgTask<CD, TD> {
type Error = DoesNotError;
fn run_iteration(&mut self) -> impl Send + Future<Output = Result<bool, Self::Error>> {
async move {
let mut made_progress = false;
// If we were sent a key to set, create the signer for it
if self.signer.is_none() && KeysToConfirm::get(&self.db, self.set.set).is_some() {
// Create and publish the initial preprocess
Self::preprocess(&mut self.db, self.set.set, 0, self.key.clone(), &mut self.signer);
made_progress = true;
}
// If we have keys to confirm, handle all messages from the tributary
if let Some(key_pair) = KeysToConfirm::get(&self.db, self.set.set) {
// Handle all messages from the Tributary
loop {
let mut tributary_txn = self.tributary_db.txn();
let Some(msg) = DkgConfirmationMessages::try_recv(&mut tributary_txn, self.set.set)
else {
break;
};
match msg {
messages::sign::CoordinatorMessage::Reattempt {
id: messages::sign::SignId { attempt, .. },
} => {
// Create and publish the preprocess for the specified attempt
Self::preprocess(
&mut self.db,
self.set.set,
attempt,
self.key.clone(),
&mut self.signer,
);
}
messages::sign::CoordinatorMessage::Preprocesses {
id: messages::sign::SignId { attempt, .. },
mut preprocesses,
} => {
// Confirm the preprocess we're expected to sign with is the one we locally have
// It may be different if we rebooted and made a second preprocess for this attempt
let Some(Signer::Preprocess { attempt: our_attempt, seed, preprocess }) =
self.signer.take()
else {
// If this message is not expected, commit the txn to drop it and move on
// At some point, we'll get a Reattempt and reset
tributary_txn.commit();
break;
};
// Determine the MuSig key signed with
let musig_validators = {
let mut ordered_participants = preprocesses.keys().copied().collect::<Vec<_>>();
ordered_participants.sort_by_key(|participant| u16::from(*participant));
let mut res = vec![];
for participant in ordered_participants {
let (validator, _weight) =
self.set.validators[usize::from(u16::from(participant) - 1)];
res.push(validator);
}
res
};
let musig_public_keys = musig_validators
.iter()
.map(|key| {
Ristretto::read_G(&mut key.0.as_slice())
.expect("Serai validator had invalid public key")
})
.collect::<Vec<_>>();
let keys = musig(
ExternalValidatorSet::musig_context(&self.set.set),
self.key.clone(),
&musig_public_keys,
)
.unwrap();
// Rebuild the machine
let (machine, preprocess_from_cache) =
AlgorithmSignMachine::from_cache(schnorrkel(), keys, seed);
assert_eq!(preprocess.as_slice(), preprocess_from_cache.serialize().as_slice());
// Ensure this is a consistent signing session
let our_i = our_i(&self.set, &self.key, &preprocesses);
let consistent = (attempt == our_attempt) &&
(preprocesses.remove(&our_i).unwrap().as_slice() == preprocess.as_slice());
if !consistent {
tributary_txn.commit();
break;
}
// Reformat the preprocesses into the expected format for Musig
let preprocesses = match make_contiguous(our_i, preprocesses, |preprocess| {
machine.read_preprocess(&mut preprocess.as_slice())
}) {
Ok(preprocesses) => preprocesses,
// This yields the *original participant index*
Err(participant) => {
Self::slash(
&mut self.db,
self.set.set,
self.set.validators[usize::from(u16::from(participant) - 1)].0,
);
tributary_txn.commit();
break;
}
};
// Calculate our share
let (machine, share) = match handle_frost_error(machine.sign(
preprocesses,
&ExternalValidatorSet::set_keys_message(&self.set.set, &key_pair),
)) {
Ok((machine, share)) => (machine, share),
// This yields the *musig participant index*
Err(participant) => {
Self::slash(
&mut self.db,
self.set.set,
musig_validators[usize::from(u16::from(participant) - 1)],
);
tributary_txn.commit();
break;
}
};
// Send our share
let share = <[u8; 32]>::try_from(share.serialize()).unwrap();
let mut txn = self.db.txn();
TributaryTransactionsFromDkgConfirmation::send(
&mut txn,
self.set.set,
&Transaction::DkgConfirmationShare { attempt, share, signed: Default::default() },
);
txn.commit();
self.signer = Some(Signer::Share {
attempt,
musig_validators,
share,
machine: Box::new(machine),
});
}
messages::sign::CoordinatorMessage::Shares {
id: messages::sign::SignId { attempt, .. },
mut shares,
} => {
let Some(Signer::Share { attempt: our_attempt, musig_validators, share, machine }) =
self.signer.take()
else {
tributary_txn.commit();
break;
};
// Ensure this is a consistent signing session
let our_i = our_i(&self.set, &self.key, &shares);
let consistent = (attempt == our_attempt) &&
(shares.remove(&our_i).unwrap().as_slice() == share.as_slice());
if !consistent {
tributary_txn.commit();
break;
}
// Reformat the shares into the expected format for Musig
let shares = match make_contiguous(our_i, shares, |share| {
machine.read_share(&mut share.as_slice())
}) {
Ok(shares) => shares,
// This yields the *original participant index*
Err(participant) => {
Self::slash(
&mut self.db,
self.set.set,
self.set.validators[usize::from(u16::from(participant) - 1)].0,
);
tributary_txn.commit();
break;
}
};
match handle_frost_error(machine.complete(shares)) {
Ok(signature) => {
// Create the bitvec of the participants
let mut signature_participants;
{
use bitvec::prelude::*;
signature_participants = bitvec![u8, Lsb0; 0; 0];
let mut i = 0;
for (validator, _) in &self.set.validators {
if Some(validator) == musig_validators.get(i) {
signature_participants.push(true);
i += 1;
} else {
signature_participants.push(false);
}
}
}
// This is safe to call multiple times as it'll just change which *valid*
// signature to publish
let mut txn = self.db.txn();
Keys::set(
&mut txn,
self.set.set,
key_pair.clone(),
signature_participants,
signature.into(),
);
txn.commit();
}
// This yields the *musig participant index*
Err(participant) => {
Self::slash(
&mut self.db,
self.set.set,
musig_validators[usize::from(u16::from(participant) - 1)],
);
tributary_txn.commit();
break;
}
}
}
}
// Because we successfully handled this message, note we made progress
made_progress = true;
tributary_txn.commit();
}
}
// Check if the key has been set on Serai
if KeysToConfirm::get(&self.db, self.set.set).is_some() &&
KeySet::get(&self.db, self.set.set).is_some()
{
// Take the keys to confirm so we never instantiate the signer again
let mut txn = self.db.txn();
KeysToConfirm::take(&mut txn, self.set.set);
KeySet::take(&mut txn, self.set.set);
txn.commit();
// Drop our own signer
// The task won't die until the Tributary does, but now it'll never do anything again
self.signer = None;
made_progress = true;
}
Ok(made_progress)
}
}
}

View File

@@ -1,10 +1,516 @@
use core::{ops::Deref, time::Duration};
use std::{sync::Arc, collections::HashMap, time::Instant};
use zeroize::{Zeroize, Zeroizing};
use rand_core::{RngCore, OsRng};
use dalek_ff_group::Ristretto;
use ciphersuite::{
group::{ff::PrimeField, GroupEncoding},
*,
};
use borsh::BorshDeserialize;
use tokio::sync::mpsc;
use serai_client_serai::{
abi::primitives::{
BlockHash,
crypto::{Public, Signature, ExternalKey, KeyPair},
network_id::ExternalNetworkId,
validator_sets::ExternalValidatorSet,
address::SeraiAddress,
},
Serai,
};
use message_queue::{Service, client::MessageQueue};
use serai_task::{Task, TaskHandle, ContinuallyRan};
use serai_cosign::{Faulted, SignedCosign, Cosigning};
use serai_coordinator_substrate::{
CanonicalEventStream, EphemeralEventStream, SignSlashReport, SetKeysTask, SignedBatches,
PublishBatchTask, SlashReports, PublishSlashReportTask,
};
use serai_coordinator_tributary::{SigningProtocolRound, Signed, Transaction, SubstrateBlockPlans};
mod db;
use db::*;
mod tributary;
mod dkg_confirmation;
mod substrate;
use substrate::SubstrateTask;
mod p2p {
use serai_coordinator_p2p::*;
pub use serai_coordinator_p2p::*;
pub use serai_coordinator_libp2p_p2p::Libp2p;
}
fn main() {
todo!("TODO")
// Use a zeroizing allocator for this entire application
// While secrets should already be zeroized, the presence of secret keys in a networked application
// (at increased risk of OOB reads) justifies the performance hit in case any secrets weren't
// already zeroized
#[global_allocator]
static ALLOCATOR: zalloc::ZeroizingAlloc<std::alloc::System> =
zalloc::ZeroizingAlloc(std::alloc::System);
async fn serai() -> Arc<Serai> {
const SERAI_CONNECTION_DELAY: Duration = Duration::from_secs(10);
const MAX_SERAI_CONNECTION_DELAY: Duration = Duration::from_secs(300);
let mut delay = SERAI_CONNECTION_DELAY;
loop {
let Ok(serai) = Serai::new(format!(
"http://{}:9944",
serai_env::var("SERAI_HOSTNAME").expect("Serai hostname wasn't provided")
)) else {
log::error!("couldn't connect to the Serai node");
tokio::time::sleep(delay).await;
delay = (delay + SERAI_CONNECTION_DELAY).min(MAX_SERAI_CONNECTION_DELAY);
continue;
};
log::info!("made initial connection to Serai node");
return Arc::new(serai);
}
}
fn spawn_cosigning<D: serai_db::Db>(
mut db: D,
serai: Arc<Serai>,
p2p: impl p2p::P2p,
tasks_to_run_upon_cosigning: Vec<TaskHandle>,
mut p2p_cosigns: mpsc::UnboundedReceiver<SignedCosign>,
) {
let mut cosigning = Cosigning::spawn(db.clone(), serai, p2p.clone(), tasks_to_run_upon_cosigning);
tokio::spawn(async move {
const COSIGN_LOOP_INTERVAL: Duration = Duration::from_secs(5);
let last_cosign_rebroadcast = Instant::now();
loop {
// Intake our own cosigns
match Cosigning::<D>::latest_cosigned_block_number(&db) {
Ok(latest_cosigned_block_number) => {
let mut txn = db.txn();
// The cosigns we prior tried to intake yet failed to
let mut cosigns = ErroneousCosigns::get(&txn).unwrap_or(vec![]);
// The cosigns we have yet to intake
while let Some(cosign) = SignedCosigns::try_recv(&mut txn) {
cosigns.push(cosign);
}
let mut erroneous = vec![];
for cosign in cosigns {
// If this cosign is stale, move on
if cosign.cosign.block_number <= latest_cosigned_block_number {
continue;
}
match cosigning.intake_cosign(&cosign) {
// Publish this cosign
Ok(()) => p2p.publish_cosign(cosign).await,
Err(e) => {
assert!(e.temporal(), "signed an invalid cosign: {e:?}");
// Since this had a temporal error, queue it to try again later
erroneous.push(cosign);
}
};
}
// Save the cosigns with temporal errors to the database
ErroneousCosigns::set(&mut txn, &erroneous);
txn.commit();
}
Err(Faulted) => {
// We don't panic here as the following code rebroadcasts our cosigns which is
// necessary to inform other coordinators of the faulty cosigns
log::error!("cosigning faulted");
}
}
let time_till_cosign_rebroadcast = (last_cosign_rebroadcast +
serai_cosign::BROADCAST_FREQUENCY)
.saturating_duration_since(Instant::now());
tokio::select! {
() = tokio::time::sleep(time_till_cosign_rebroadcast) => {
for cosign in cosigning.cosigns_to_rebroadcast() {
p2p.publish_cosign(cosign).await;
}
}
cosign = p2p_cosigns.recv() => {
let cosign = cosign.expect("p2p cosigns channel was dropped?");
if cosigning.intake_cosign(&cosign).is_ok() {
p2p.publish_cosign(cosign).await;
}
}
// Make sure this loop runs at least this often
() = tokio::time::sleep(COSIGN_LOOP_INTERVAL) => {}
}
}
});
}
async fn handle_network(
mut db: impl serai_db::Db,
message_queue: Arc<MessageQueue>,
serai: Arc<Serai>,
network: ExternalNetworkId,
) {
// Spawn the task to publish batches for this network
{
let (publish_batch_task_def, publish_batch_task) = Task::new();
tokio::spawn(
PublishBatchTask::new(db.clone(), serai.clone(), network)
.continually_run(publish_batch_task_def, vec![]),
);
// Forget its handle so it always runs in the background
core::mem::forget(publish_batch_task);
}
// Handle Processor messages
loop {
let (msg_id, msg) = {
let msg = message_queue.next(Service::Processor(network)).await;
// Check this message's sender is as expected
assert_eq!(msg.from, Service::Processor(network));
// Check this message's ID is as expected
let last = LastProcessorMessage::get(&db, network);
let next = last.map(|id| id + 1).unwrap_or(0);
// This should either be the last message's ID, if we committed but didn't send our ACK, or
// the expected next message's ID
assert!((Some(msg.id) == last) || (msg.id == next));
// TODO: Check msg.sig
// If this is the message we already handled, and just failed to ACK, ACK it now and move on
if Some(msg.id) == last {
message_queue.ack(Service::Processor(network), msg.id).await;
continue;
}
(msg.id, messages::ProcessorMessage::deserialize(&mut msg.msg.as_slice()).unwrap())
};
let mut txn = db.txn();
match msg {
messages::ProcessorMessage::KeyGen(msg) => match msg {
messages::key_gen::ProcessorMessage::Participation { session, participation } => {
let set = ExternalValidatorSet { network, session };
TributaryTransactionsFromProcessorMessages::send(
&mut txn,
set,
&Transaction::DkgParticipation { participation, signed: Signed::default() },
);
}
messages::key_gen::ProcessorMessage::GeneratedKeyPair {
session,
substrate_key,
network_key,
} => {
KeysToConfirm::set(
&mut txn,
ExternalValidatorSet { network, session },
&KeyPair(
Public(substrate_key),
ExternalKey(
network_key
.try_into()
.expect("generated a network key which exceeds the maximum key length"),
),
),
);
}
messages::key_gen::ProcessorMessage::Blame { session, participant } => {
RemoveParticipant::send(&mut txn, ExternalValidatorSet { network, session }, participant);
}
},
messages::ProcessorMessage::Sign(msg) => match msg {
messages::sign::ProcessorMessage::InvalidParticipant { session, participant } => {
RemoveParticipant::send(&mut txn, ExternalValidatorSet { network, session }, participant);
}
messages::sign::ProcessorMessage::Preprocesses { id, preprocesses } => {
let set = ExternalValidatorSet { network, session: id.session };
if id.attempt == 0 {
// Batches are declared by their intent to be signed
if let messages::sign::VariantSignId::Batch(hash) = id.id {
TributaryTransactionsFromProcessorMessages::send(
&mut txn,
set,
&Transaction::Batch { hash },
);
}
}
TributaryTransactionsFromProcessorMessages::send(
&mut txn,
set,
&Transaction::Sign {
id: id.id,
attempt: id.attempt,
round: SigningProtocolRound::Preprocess,
data: preprocesses,
signed: Signed::default(),
},
);
}
messages::sign::ProcessorMessage::Shares { id, shares } => {
let set = ExternalValidatorSet { network, session: id.session };
TributaryTransactionsFromProcessorMessages::send(
&mut txn,
set,
&Transaction::Sign {
id: id.id,
attempt: id.attempt,
round: SigningProtocolRound::Share,
data: shares,
signed: Signed::default(),
},
);
}
},
messages::ProcessorMessage::Coordinator(msg) => match msg {
messages::coordinator::ProcessorMessage::CosignedBlock { cosign } => {
SignedCosigns::send(&mut txn, &cosign);
}
messages::coordinator::ProcessorMessage::SignedBatch { batch } => {
SignedBatches::send(&mut txn, &batch);
}
messages::coordinator::ProcessorMessage::SignedSlashReport {
session,
slash_report,
signature,
} => {
SlashReports::set(
&mut txn,
ExternalValidatorSet { network, session },
slash_report,
Signature(signature),
);
}
},
messages::ProcessorMessage::Substrate(msg) => match msg {
messages::substrate::ProcessorMessage::SubstrateBlockAck { block, plans } => {
let block = BlockHash(block);
let mut by_session = HashMap::new();
for plan in plans {
by_session
.entry(plan.session)
.or_insert_with(|| Vec::with_capacity(1))
.push(plan.transaction_plan_id);
}
for (session, plans) in by_session {
let set = ExternalValidatorSet { network, session };
SubstrateBlockPlans::set(&mut txn, set, block, &plans);
TributaryTransactionsFromProcessorMessages::send(
&mut txn,
set,
&Transaction::SubstrateBlock { hash: block },
);
}
}
},
}
// Mark this as the last handled message
LastProcessorMessage::set(&mut txn, network, &msg_id);
// Commit the txn
txn.commit();
// Now that the txn is committed and we'll never handle this message again, acknowledge it so it
// isn't redelivered
message_queue.ack(Service::Processor(network), msg_id).await;
}
}
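The message loop above builds exactly-once processing on top of an at-least-once queue: the last handled ID is committed in the same transaction as the message's side effects, and the ACK is only sent afterwards, so a crash between commit and ACK merely causes a redundant re-ACK on the next delivery. A condensed sketch of the invariant, with the DB and queue mocked out:

```rust
// Condensed sketch; `Db` and `Queue` stand in for serai_db and message-queue.
struct Db { last: Option<u64> }
struct Queue;

impl Queue {
  fn ack(&self, id: u64) { println!("acked {id}"); }
}

fn handle(db: &mut Db, queue: &Queue, id: u64) {
  let last = db.last;
  let next = last.map(|id| id + 1).unwrap_or(0);
  // Either a redelivery of the committed-but-unacked message, or the next one
  assert!((Some(id) == last) || (id == next));

  if Some(id) == last {
    // Already handled and committed; just re-send the lost ACK
    queue.ack(id);
    return;
  }

  // ... apply side effects and record the ID atomically (one txn) ...
  db.last = Some(id);
  // Only ACK after the commit: a crash before this line is safe
  queue.ack(id);
}

fn main() {
  let mut db = Db { last: None };
  let queue = Queue;
  handle(&mut db, &queue, 0);
  handle(&mut db, &queue, 0); // Simulated redelivery after a lost ACK
  handle(&mut db, &queue, 1);
}
```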
#[tokio::main]
async fn main() {
// Override the panic handler with one which will panic if any tokio task panics
{
let existing = std::panic::take_hook();
std::panic::set_hook(Box::new(move |panic| {
existing(panic);
const MSG: &str = "exiting the process due to a task panicking";
println!("{MSG}");
log::error!("{MSG}");
std::process::exit(1);
}));
}
// Initialize the logger
if std::env::var("RUST_LOG").is_err() {
std::env::set_var("RUST_LOG", serai_env::var("RUST_LOG").unwrap_or_else(|| "info".to_string()));
}
env_logger::init();
log::info!("starting coordinator service...");
// Read the Serai key from the env
let serai_key = {
let mut key_hex = serai_env::var("SERAI_KEY").expect("Serai key wasn't provided");
let mut key_vec = hex::decode(&key_hex).map_err(|_| ()).expect("Serai key wasn't hex-encoded");
key_hex.zeroize();
if key_vec.len() != 32 {
key_vec.zeroize();
panic!("Serai key had an invalid length");
}
let mut key_bytes = [0; 32];
key_bytes.copy_from_slice(&key_vec);
key_vec.zeroize();
let key = Zeroizing::new(<Ristretto as WrappedGroup>::F::from_repr(key_bytes).unwrap());
key_bytes.zeroize();
key
};
// Open the database
let mut db = coordinator_db();
let existing_tributaries_at_boot = {
let mut txn = db.txn();
// Cleanup all historic Tributaries
while let Some(to_cleanup) = TributaryCleanup::try_recv(&mut txn) {
prune_tributary_db(to_cleanup);
// Remove the keys to confirm for this network
KeysToConfirm::take(&mut txn, to_cleanup);
KeySet::take(&mut txn, to_cleanup);
// Drain the cosign intents created for this set
while !Cosigning::<Db>::intended_cosigns(&mut txn, to_cleanup).is_empty() {}
// Drain the transactions to publish for this set
while TributaryTransactionsFromProcessorMessages::try_recv(&mut txn, to_cleanup).is_some() {}
while TributaryTransactionsFromDkgConfirmation::try_recv(&mut txn, to_cleanup).is_some() {}
// Drain the participants to remove for this set
while RemoveParticipant::try_recv(&mut txn, to_cleanup).is_some() {}
// Remove the SignSlashReport notification
SignSlashReport::try_recv(&mut txn, to_cleanup);
}
// Remove retired Tributaries from ActiveTributaries
let mut active_tributaries = ActiveTributaries::get(&txn).unwrap_or(vec![]);
active_tributaries.retain(|tributary| {
RetiredTributary::get(&txn, tributary.set.network).map(|session| session.0) <
Some(tributary.set.session.0)
});
ActiveTributaries::set(&mut txn, &active_tributaries);
txn.commit();
active_tributaries
};
// Connect to the message-queue
let message_queue = Arc::new(MessageQueue::from_env(Service::Coordinator));
// Connect to the Serai node
let serai = serai().await;
let (p2p_add_tributary_send, p2p_add_tributary_recv) = mpsc::unbounded_channel();
let (p2p_retire_tributary_send, p2p_retire_tributary_recv) = mpsc::unbounded_channel();
let (p2p_cosigns_send, p2p_cosigns_recv) = mpsc::unbounded_channel();
// Spawn the P2P network
let p2p = {
let serai_keypair = {
let mut key_bytes = serai_key.to_bytes();
// Schnorrkel SecretKey is the key followed by 32 bytes of entropy for nonces
let mut expanded_key = Zeroizing::new([0; 64]);
expanded_key.as_mut_slice()[.. 32].copy_from_slice(&key_bytes);
OsRng.fill_bytes(&mut expanded_key.as_mut_slice()[32 ..]);
key_bytes.zeroize();
Zeroizing::new(
schnorrkel::SecretKey::from_bytes(expanded_key.as_slice()).unwrap().to_keypair(),
)
};
let p2p = p2p::Libp2p::new(&serai_keypair, serai.clone());
tokio::spawn(p2p::run::<Db, Transaction, _>(
db.clone(),
p2p.clone(),
p2p_add_tributary_recv,
p2p_retire_tributary_recv,
p2p_cosigns_send,
));
p2p
};
// Spawn the Substrate scanners
let (substrate_task_def, substrate_task) = Task::new();
let (substrate_canonical_task_def, substrate_canonical_task) = Task::new();
tokio::spawn(
CanonicalEventStream::new(db.clone(), serai.clone())
.continually_run(substrate_canonical_task_def, vec![substrate_task.clone()]),
);
let (substrate_ephemeral_task_def, substrate_ephemeral_task) = Task::new();
tokio::spawn(
EphemeralEventStream::new(
db.clone(),
serai.clone(),
SeraiAddress((<Ristretto as WrappedGroup>::generator() * serai_key.deref()).to_bytes()),
)
.continually_run(substrate_ephemeral_task_def, vec![substrate_task]),
);
// Spawn the cosign handler
spawn_cosigning(
db.clone(),
serai.clone(),
p2p.clone(),
// Run the Substrate scanners once we cosign new blocks
vec![substrate_canonical_task, substrate_ephemeral_task],
p2p_cosigns_recv,
);
// Spawn all Tributaries on-disk
for tributary in existing_tributaries_at_boot {
crate::tributary::spawn_tributary(
db.clone(),
message_queue.clone(),
p2p.clone(),
&p2p_add_tributary_send,
tributary,
serai_key.clone(),
)
.await;
}
// Handle the events from the Substrate scanner
tokio::spawn(
(SubstrateTask {
serai_key: serai_key.clone(),
db: db.clone(),
message_queue: message_queue.clone(),
p2p: p2p.clone(),
p2p_add_tributary: p2p_add_tributary_send.clone(),
p2p_retire_tributary: p2p_retire_tributary_send.clone(),
})
.continually_run(substrate_task_def, vec![]),
);
// Handle each of the networks
for network in ExternalNetworkId::all() {
tokio::spawn(handle_network(db.clone(), message_queue.clone(), serai.clone(), network));
}
// Spawn the task to set keys
{
let (set_keys_task_def, set_keys_task) = Task::new();
tokio::spawn(
SetKeysTask::new(db.clone(), serai.clone()).continually_run(set_keys_task_def, vec![]),
);
// Forget its handle so it always runs in the background
core::mem::forget(set_keys_task);
}
// Spawn the task to publish slash reports
{
let (publish_slash_report_task_def, publish_slash_report_task) = Task::new();
tokio::spawn(
PublishSlashReportTask::new(db, serai).continually_run(publish_slash_report_task_def, vec![]),
);
// Always have this run in the background
core::mem::forget(publish_slash_report_task);
}
// Run the spawned tasks ad-infinitum
core::future::pending().await
}

View File

@@ -0,0 +1,167 @@
use core::future::Future;
use std::sync::Arc;
use zeroize::Zeroizing;
use ciphersuite::*;
use dalek_ff_group::Ristretto;
use tokio::sync::mpsc;
use serai_db::{DbTxn, Db as DbTrait};
use serai_client_serai::abi::primitives::{
network_id::ExternalNetworkId,
validator_sets::{Session, ExternalValidatorSet},
};
use message_queue::{Service, Metadata, client::MessageQueue};
use tributary_sdk::Tributary;
use serai_task::ContinuallyRan;
use serai_coordinator_tributary::Transaction;
use serai_coordinator_p2p::P2p;
use crate::{Db, KeySet};
pub(crate) struct SubstrateTask<P: P2p> {
pub(crate) serai_key: Zeroizing<<Ristretto as WrappedGroup>::F>,
pub(crate) db: Db,
pub(crate) message_queue: Arc<MessageQueue>,
pub(crate) p2p: P,
pub(crate) p2p_add_tributary:
mpsc::UnboundedSender<(ExternalValidatorSet, Tributary<Db, Transaction, P>)>,
pub(crate) p2p_retire_tributary: mpsc::UnboundedSender<ExternalValidatorSet>,
}
impl<P: P2p> ContinuallyRan for SubstrateTask<P> {
type Error = String; // TODO
fn run_iteration(&mut self) -> impl Send + Future<Output = Result<bool, Self::Error>> {
async move {
let mut made_progress = false;
// Handle the Canonical events
for network in ExternalNetworkId::all() {
loop {
let mut txn = self.db.txn();
let Some(msg) = serai_coordinator_substrate::Canonical::try_recv(&mut txn, network)
else {
break;
};
match msg {
messages::substrate::CoordinatorMessage::SetKeys { session, .. } => {
KeySet::set(&mut txn, ExternalValidatorSet { network, session }, &());
}
messages::substrate::CoordinatorMessage::SlashesReported { session } => {
let prior_retired = crate::db::RetiredTributary::get(&txn, network);
let next_to_be_retired =
prior_retired.map(|session| Session(session.0 + 1)).unwrap_or(Session(0));
assert_eq!(session, next_to_be_retired);
crate::db::RetiredTributary::set(&mut txn, network, &session);
self
.p2p_retire_tributary
.send(ExternalValidatorSet { network, session })
.expect("p2p retire_tributary channel dropped?");
}
messages::substrate::CoordinatorMessage::Block { .. } => {}
}
let msg = messages::CoordinatorMessage::from(msg);
let metadata = Metadata {
from: Service::Coordinator,
to: Service::Processor(network),
intent: msg.intent(),
};
let msg = borsh::to_vec(&msg).unwrap();
self.message_queue.queue(metadata, msg).await?;
txn.commit();
made_progress = true;
}
}
// Handle the NewSet events
loop {
let mut txn = self.db.txn();
let Some(new_set) = serai_coordinator_substrate::NewSet::try_recv(&mut txn) else { break };
if let Some(historic_session) = new_set.set.session.0.checked_sub(2) {
// We should have retired this session if we're here
if crate::db::RetiredTributary::get(&txn, new_set.set.network).map(|session| session.0) <
Some(historic_session)
{
/*
If we haven't, it's because we're processing the NewSet event before the retirement
event from the Canonical event stream. This happens if the Canonical event, and
then the NewSet event, fire while we're already iterating over NewSet events.
We break, dropping the txn and restoring this NewSet to the database, so we'll
only handle it once a future iteration of this loop has handled the retirement event.
*/
break;
}
/*
Queue this historical Tributary for deletion.
We explicitly don't queue this upon Tributary retirement, instead doing so here, to
give time to investigate retired Tributaries if questions are raised post-retirement.
This gives a week (the duration of the following session) after the Tributary has been
retired to make a backup of the data directory for any investigations.
*/
crate::db::TributaryCleanup::send(
&mut txn,
&ExternalValidatorSet {
network: new_set.set.network,
session: Session(historic_session),
},
);
}
// Save this Tributary as active to the database
{
let mut active_tributaries =
crate::db::ActiveTributaries::get(&txn).unwrap_or(Vec::with_capacity(1));
active_tributaries.push(new_set.clone());
crate::db::ActiveTributaries::set(&mut txn, &active_tributaries);
}
// Send GenerateKey to the processor
let msg = messages::key_gen::CoordinatorMessage::GenerateKey {
session: new_set.set.session,
threshold: new_set.threshold,
evrf_public_keys: new_set.evrf_public_keys.clone(),
};
let msg = messages::CoordinatorMessage::from(msg);
let metadata = Metadata {
from: Service::Coordinator,
to: Service::Processor(new_set.set.network),
intent: msg.intent(),
};
let msg = borsh::to_vec(&msg).unwrap();
self.message_queue.queue(metadata, msg).await?;
// Commit the transaction for all of this
txn.commit();
// Now spawn the Tributary
// If we reboot after committing the txn, but before this is called, this will be called
// on boot
crate::tributary::spawn_tributary(
self.db.clone(),
self.message_queue.clone(),
self.p2p.clone(),
&self.p2p_add_tributary,
new_set,
self.serai_key.clone(),
)
.await;
made_progress = true;
}
Ok(made_progress)
}
}
}

View File

@@ -0,0 +1,595 @@
use core::{future::Future, time::Duration};
use std::sync::Arc;
use zeroize::Zeroizing;
use rand_core::OsRng;
use blake2::{digest::typenum::U32, Digest, Blake2s};
use ciphersuite::*;
use dalek_ff_group::Ristretto;
use tokio::sync::mpsc;
use serai_db::{Get, DbTxn, Db as DbTrait, create_db, db_channel};
use serai_client_serai::abi::primitives::validator_sets::ExternalValidatorSet;
use tributary_sdk::{TransactionKind, TransactionError, ProvidedError, TransactionTrait, Tributary};
use serai_task::{Task, TaskHandle, DoesNotError, ContinuallyRan};
use message_queue::{Service, Metadata, client::MessageQueue};
use serai_cosign::{Faulted, CosignIntent, Cosigning};
use serai_coordinator_substrate::{NewSetInformation, SignSlashReport};
use serai_coordinator_tributary::{
Topic, Transaction, ProcessorMessages, CosignIntents, RecognizedTopics, ScanTributaryTask,
};
use serai_coordinator_p2p::P2p;
use crate::{
Db, TributaryTransactionsFromProcessorMessages, TributaryTransactionsFromDkgConfirmation,
RemoveParticipant, dkg_confirmation::ConfirmDkgTask,
};
create_db! {
Coordinator {
PublishOnRecognition: (set: ExternalValidatorSet, topic: Topic) -> Transaction,
}
}
db_channel! {
Coordinator {
PendingCosigns: (set: ExternalValidatorSet) -> CosignIntent,
}
}
/// Provide a Provided Transaction to the Tributary.
///
/// This is not a well-designed function. It's specific to the context in which it's called,
/// within this file. It should only be considered an internal helper for this domain alone.
async fn provide_transaction<TD: DbTrait, P: P2p>(
set: ExternalValidatorSet,
tributary: &Tributary<TD, Transaction, P>,
tx: Transaction,
) {
match tributary.provide_transaction(tx.clone()).await {
// The Tributary uses its own DB, so we may provide this multiple times if we reboot before
// committing the txn which provoked this
Ok(()) | Err(ProvidedError::AlreadyProvided) => {}
Err(ProvidedError::NotProvided) => {
panic!("providing a Transaction which wasn't a Provided transaction: {tx:?}");
}
Err(ProvidedError::InvalidProvided(e)) => {
panic!("providing an invalid Provided transaction, tx: {tx:?}, error: {e:?}")
}
// The Tributary's scan task won't advance if we don't have the Provided transactions
// present on-chain, and this enters an infinite loop to block the calling task from
// advancing
Err(ProvidedError::LocalMismatchesOnChain) => loop {
log::error!(
"Tributary {set:?} was supposed to provide {tx:?} but peers disagree, halting Tributary",
);
// Print this every five minutes as this does need to be handled
tokio::time::sleep(Duration::from_secs(5 * 60)).await;
},
}
}
/// Provides Cosign/Cosigned Transactions onto the Tributary.
pub(crate) struct ProvideCosignCosignedTransactionsTask<CD: DbTrait, TD: DbTrait, P: P2p> {
db: CD,
tributary_db: TD,
set: NewSetInformation,
tributary: Tributary<TD, Transaction, P>,
}
impl<CD: DbTrait, TD: DbTrait, P: P2p> ContinuallyRan
for ProvideCosignCosignedTransactionsTask<CD, TD, P>
{
type Error = String;
fn run_iteration(&mut self) -> impl Send + Future<Output = Result<bool, Self::Error>> {
async move {
let mut made_progress = false;
// Check if we produced any cosigns we were supposed to
let mut pending_notable_cosign = false;
loop {
let mut txn = self.db.txn();
// Fetch the next cosign this tributary should handle
let Some(cosign) = PendingCosigns::try_recv(&mut txn, self.set.set) else { break };
pending_notable_cosign = cosign.notable;
// If we (Serai) haven't cosigned this block, break as this is still pending
let latest = match Cosigning::<CD>::latest_cosigned_block_number(&txn) {
Ok(latest) => latest,
Err(Faulted) => {
log::error!("cosigning faulted");
Err("cosigning faulted")?
}
};
if latest < cosign.block_number {
break;
}
// Because we've cosigned it, provide the TX for that
{
let mut txn = self.tributary_db.txn();
CosignIntents::provide(&mut txn, self.set.set, &cosign);
txn.commit();
}
provide_transaction(
self.set.set,
&self.tributary,
Transaction::Cosigned { substrate_block_hash: cosign.block_hash },
)
.await;
// Clear pending_notable_cosign since this cosign isn't pending
pending_notable_cosign = false;
// Commit the txn to clear this from PendingCosigns
txn.commit();
made_progress = true;
}
// If we don't have any notable cosigns pending, provide the next set of cosign intents
if !pending_notable_cosign {
let mut txn = self.db.txn();
// intended_cosigns will only yield up to and including the next notable cosign
for cosign in Cosigning::<CD>::intended_cosigns(&mut txn, self.set.set) {
// Flag this cosign as pending
PendingCosigns::send(&mut txn, self.set.set, &cosign);
// Provide the transaction to queue it for work
provide_transaction(
self.set.set,
&self.tributary,
Transaction::Cosign { substrate_block_hash: cosign.block_hash },
)
.await;
}
txn.commit();
made_progress = true;
}
Ok(made_progress)
}
}
}
#[must_use]
async fn add_signed_unsigned_transaction<TD: DbTrait, P: P2p>(
tributary: &Tributary<TD, Transaction, P>,
key: &Zeroizing<<Ristretto as WrappedGroup>::F>,
mut tx: Transaction,
) -> bool {
// If this is a signed transaction, sign it
if matches!(tx.kind(), TransactionKind::Signed(_, _)) {
tx.sign(&mut OsRng, tributary.genesis(), key);
}
let res = tributary.add_transaction(tx.clone()).await;
match &res {
// Fresh publication, already published
Ok(true | false) => {}
Err(
TransactionError::TooLargeTransaction |
TransactionError::InvalidSigner |
TransactionError::InvalidSignature |
TransactionError::InvalidContent,
) => {
panic!("created an invalid transaction, tx: {tx:?}, err: {res:?}");
}
// InvalidNonce may signal an out-of-order TX, not an invalid one, but we only create nonce
// #n+1 after on-chain inclusion of the TX with nonce #n, so within our context it's invalid
// unless this transaction was already included on-chain
Err(TransactionError::InvalidNonce) => {
let TransactionKind::Signed(order, signed) = tx.kind() else {
panic!("non-Signed transaction had InvalidNonce");
};
let next_nonce = tributary
.next_nonce(&signed.signer, &order)
.await
.expect("signer who is a present validator didn't have a nonce");
assert!(next_nonce != signed.nonce);
// We're publishing an old transaction
if next_nonce > signed.nonce {
return true;
}
panic!("nonce in transaction wasn't contiguous with nonce on-chain");
}
// We've published too many transactions recently
Err(TransactionError::TooManyInMempool) => {
return false;
}
// This isn't a Provided transaction so this should never be hit
Err(TransactionError::ProvidedAddedToMempool) => unreachable!(),
}
true
}
async fn add_with_recognition_check<TD: DbTrait, P: P2p>(
set: ExternalValidatorSet,
tributary_db: &mut TD,
tributary: &Tributary<TD, Transaction, P>,
key: &Zeroizing<<Ristretto as WrappedGroup>::F>,
tx: Transaction,
) -> bool {
let kind = tx.kind();
match kind {
TransactionKind::Provided(_) => provide_transaction(set, tributary, tx).await,
TransactionKind::Unsigned | TransactionKind::Signed(_, _) => {
// If this is a transaction with signing data, check the topic is recognized before
// publishing
let topic = tx.topic();
let still_requires_recognition = if let Some(topic) = topic {
(topic.requires_recognition() && (!RecognizedTopics::recognized(tributary_db, set, topic)))
.then_some(topic)
} else {
None
};
if let Some(topic) = still_requires_recognition {
// Queue the transaction until the topic is recognized
// We use the Tributary DB for this so it's cleaned up when the Tributary DB is
let mut tributary_txn = tributary_db.txn();
PublishOnRecognition::set(&mut tributary_txn, set, topic, &tx);
tributary_txn.commit();
} else {
// Actually add the transaction
if !add_signed_unsigned_transaction(tributary, key, tx).await {
return false;
}
}
}
}
true
}
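The recognition gate above is a park-and-replay queue: a transaction on a not-yet-recognized topic is parked under `PublishOnRecognition`, then replayed once `RecognizedTopics` yields its topic (the replay loop lives in `AddTributaryTransactionsTask` below). A minimal standalone sketch of that pattern, with hypothetical types standing in for the actual topics, transactions, and DB:
use std::collections::{HashMap, HashSet};
// Hypothetical stand-ins for the topic/transaction types and the publication hook
struct ParkAndReplay<Tx> {
  recognized: HashSet<u64>,
  parked: HashMap<u64, Vec<Tx>>,
}
impl<Tx> ParkAndReplay<Tx> {
  // Publish immediately if the topic is recognized, else park the transaction
  fn submit(&mut self, topic: u64, tx: Tx, publish: &mut impl FnMut(Tx)) {
    if self.recognized.contains(&topic) {
      publish(tx);
    } else {
      self.parked.entry(topic).or_default().push(tx);
    }
  }
  // Upon recognition, replay everything parked under that topic
  fn recognize(&mut self, topic: u64, publish: &mut impl FnMut(Tx)) {
    self.recognized.insert(topic);
    for tx in self.parked.remove(&topic).unwrap_or_default() {
      publish(tx);
    }
  }
}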
/// Adds all of the transactions sent via `TributaryTransactionsFromProcessorMessages`.
pub(crate) struct AddTributaryTransactionsTask<CD: DbTrait, TD: DbTrait, P: P2p> {
db: CD,
tributary_db: TD,
tributary: Tributary<TD, Transaction, P>,
set: NewSetInformation,
key: Zeroizing<<Ristretto as WrappedGroup>::F>,
}
impl<CD: DbTrait, TD: DbTrait, P: P2p> ContinuallyRan for AddTributaryTransactionsTask<CD, TD, P> {
type Error = DoesNotError;
fn run_iteration(&mut self) -> impl Send + Future<Output = Result<bool, Self::Error>> {
async move {
let mut made_progress = false;
// Provide/add all transactions sent our way
loop {
let mut txn = self.db.txn();
let Some(tx) = TributaryTransactionsFromDkgConfirmation::try_recv(&mut txn, self.set.set)
else {
break;
};
if !add_with_recognition_check(
self.set.set,
&mut self.tributary_db,
&self.tributary,
&self.key,
tx,
)
.await
{
break;
}
made_progress = true;
txn.commit();
}
loop {
let mut txn = self.db.txn();
let Some(tx) = TributaryTransactionsFromProcessorMessages::try_recv(&mut txn, self.set.set)
else {
break;
};
if !add_with_recognition_check(
self.set.set,
&mut self.tributary_db,
&self.tributary,
&self.key,
tx,
)
.await
{
break;
}
made_progress = true;
txn.commit();
}
// Provide/add all transactions due to newly recognized topics
loop {
let mut tributary_txn = self.tributary_db.txn();
let Some(topic) =
RecognizedTopics::try_recv_topic_requiring_recognition(&mut tributary_txn, self.set.set)
else {
break;
};
if let Some(tx) = PublishOnRecognition::take(&mut tributary_txn, self.set.set, topic) {
if !add_signed_unsigned_transaction(&self.tributary, &self.key, tx).await {
break;
}
}
made_progress = true;
tributary_txn.commit();
}
// Publish any participant removals
loop {
let mut txn = self.db.txn();
let Some(participant) = RemoveParticipant::try_recv(&mut txn, self.set.set) else { break };
let tx = Transaction::RemoveParticipant {
participant: self.set.participant_indexes_reverse_lookup[&participant],
signed: Default::default(),
};
if !add_signed_unsigned_transaction(&self.tributary, &self.key, tx).await {
break;
}
made_progress = true;
txn.commit();
}
Ok(made_progress)
}
}
}
/// Takes the messages from ScanTributaryTask and publishes them to the message-queue.
pub(crate) struct TributaryProcessorMessagesTask<TD: DbTrait> {
tributary_db: TD,
set: ExternalValidatorSet,
message_queue: Arc<MessageQueue>,
}
impl<TD: DbTrait> ContinuallyRan for TributaryProcessorMessagesTask<TD> {
type Error = String; // TODO
fn run_iteration(&mut self) -> impl Send + Future<Output = Result<bool, Self::Error>> {
async move {
let mut made_progress = false;
loop {
let mut txn = self.tributary_db.txn();
let Some(msg) = ProcessorMessages::try_recv(&mut txn, self.set) else { break };
let metadata = Metadata {
from: Service::Coordinator,
to: Service::Processor(self.set.network),
intent: msg.intent(),
};
let msg = borsh::to_vec(&msg).unwrap();
self.message_queue.queue(metadata, msg).await?;
txn.commit();
made_progress = true;
}
Ok(made_progress)
}
}
}
/// Checks for the notification to sign a slash report and does so if present.
pub(crate) struct SignSlashReportTask<CD: DbTrait, TD: DbTrait, P: P2p> {
db: CD,
tributary_db: TD,
tributary: Tributary<TD, Transaction, P>,
set: NewSetInformation,
key: Zeroizing<<Ristretto as WrappedGroup>::F>,
}
impl<CD: DbTrait, TD: DbTrait, P: P2p> ContinuallyRan for SignSlashReportTask<CD, TD, P> {
type Error = DoesNotError;
fn run_iteration(&mut self) -> impl Send + Future<Output = Result<bool, Self::Error>> {
async move {
let mut txn = self.db.txn();
let Some(()) = SignSlashReport::try_recv(&mut txn, self.set.set) else { return Ok(false) };
// Fetch the slash report for this Tributary
let mut tx =
serai_coordinator_tributary::slash_report_transaction(&self.tributary_db, &self.set);
tx.sign(&mut OsRng, self.tributary.genesis(), &self.key);
let res = self.tributary.add_transaction(tx.clone()).await;
match &res {
// Fresh publication, already published
Ok(true | false) => {}
Err(
TransactionError::TooLargeTransaction |
TransactionError::InvalidSigner |
TransactionError::InvalidNonce |
TransactionError::InvalidSignature |
TransactionError::InvalidContent,
) => {
panic!("created an invalid SlashReport transaction, tx: {tx:?}, err: {res:?}");
}
// We've published too many transactions recently
// Drop this txn to try to publish it again later on a future iteration
Err(TransactionError::TooManyInMempool) => {
drop(txn);
return Ok(false);
}
// This isn't a Provided transaction so this should never be hit
Err(TransactionError::ProvidedAddedToMempool) => unreachable!(),
}
txn.commit();
Ok(true)
}
}
}
/// Run the scan task whenever the Tributary adds a new block.
async fn scan_on_new_block<CD: DbTrait, TD: DbTrait, P: P2p>(
db: CD,
set: ExternalValidatorSet,
tributary: Tributary<TD, Transaction, P>,
scan_tributary_task: TaskHandle,
tasks_to_keep_alive: Vec<TaskHandle>,
) {
loop {
// Break once this Tributary is retired
if crate::RetiredTributary::get(&db, set.network).map(|session| session.0) >=
Some(set.session.0)
{
drop(tasks_to_keep_alive);
break;
}
// Have the tributary scanner run as soon as there's a new block
match tributary.next_block_notification().await.await {
Ok(()) => scan_tributary_task.run_now(),
// unreachable since this owns the tributary object and doesn't drop it
Err(_) => panic!("tributary was dropped causing notification to error"),
}
}
}
/// Spawn a Tributary.
///
/// This will:
/// - Spawn the Tributary
/// - Inform the P2P network of the Tributary
/// - Spawn the ScanTributaryTask
/// - Spawn the ProvideCosignCosignedTransactionsTask
/// - Spawn the TributaryProcessorMessagesTask
/// - Spawn the AddTributaryTransactionsTask
/// - Spawn the ConfirmDkgTask
/// - Spawn the SignSlashReportTask
/// - Iterate the scan task whenever a new block occurs (not just on the standard interval)
pub(crate) async fn spawn_tributary<P: P2p>(
db: Db,
message_queue: Arc<MessageQueue>,
p2p: P,
p2p_add_tributary: &mpsc::UnboundedSender<(ExternalValidatorSet, Tributary<Db, Transaction, P>)>,
set: NewSetInformation,
serai_key: Zeroizing<<Ristretto as WrappedGroup>::F>,
) {
// Don't spawn retired Tributaries
if crate::db::RetiredTributary::get(&db, set.set.network).map(|session| session.0) >=
Some(set.set.session.0)
{
return;
}
let genesis =
<[u8; 32]>::from(Blake2s::<U32>::digest(borsh::to_vec(&(set.serai_block, set.set)).unwrap()));
// Since the Serai block will be finalized, then cosigned, before we handle this, this time will
// be a couple of minutes stale. While the Tributary will still function with a start time in the
// past, the Tributary will immediately incur round timeouts. We reduce these by adding a
// constant delay of a couple of minutes.
const TRIBUTARY_START_TIME_DELAY: u64 = 120;
let start_time = set.declaration_time + TRIBUTARY_START_TIME_DELAY;
let mut tributary_validators = Vec::with_capacity(set.validators.len());
for (validator, weight) in set.validators.iter().copied() {
let validator_key = <Ristretto as GroupIo>::read_G(&mut validator.0.as_slice())
.expect("Serai validator had an invalid public key");
let weight = u64::from(weight);
tributary_validators.push((validator_key, weight));
}
// Spawn the Tributary
let tributary_db = crate::db::tributary_db(set.set);
let tributary = Tributary::new(
tributary_db.clone(),
genesis,
start_time,
serai_key.clone(),
tributary_validators,
p2p,
)
.await
.unwrap();
let reader = tributary.reader();
// Inform the P2P network
p2p_add_tributary
.send((set.set, tributary.clone()))
.expect("p2p's add_tributary channel was closed?");
// Spawn the task to provide Cosign/Cosigned transactions onto the Tributary
let (provide_cosign_cosigned_transactions_task_def, provide_cosign_cosigned_transactions_task) =
Task::new();
tokio::spawn(
(ProvideCosignCosignedTransactionsTask {
db: db.clone(),
tributary_db: tributary_db.clone(),
set: set.clone(),
tributary: tributary.clone(),
})
.continually_run(provide_cosign_cosigned_transactions_task_def, vec![]),
);
// Spawn the task to send all messages from the Tributary scanner to the message-queue
let (scan_tributary_messages_task_def, scan_tributary_messages_task) = Task::new();
tokio::spawn(
(TributaryProcessorMessagesTask {
tributary_db: tributary_db.clone(),
set: set.set,
message_queue,
})
.continually_run(scan_tributary_messages_task_def, vec![]),
);
// Spawn the scan task
let (scan_tributary_task_def, scan_tributary_task) = Task::new();
tokio::spawn(
ScanTributaryTask::<_, P>::new(tributary_db.clone(), set.clone(), reader)
// This is the only handle for this TributaryProcessorMessagesTask, so when this task is
// dropped, it will be too
.continually_run(scan_tributary_task_def, vec![scan_tributary_messages_task]),
);
// Spawn the add transactions task
let (add_tributary_transactions_task_def, add_tributary_transactions_task) = Task::new();
tokio::spawn(
(AddTributaryTransactionsTask {
db: db.clone(),
tributary_db: tributary_db.clone(),
tributary: tributary.clone(),
set: set.clone(),
key: serai_key.clone(),
})
.continually_run(add_tributary_transactions_task_def, vec![]),
);
// Spawn the task to confirm the DKG result
let (confirm_dkg_task_def, confirm_dkg_task) = Task::new();
tokio::spawn(
ConfirmDkgTask::new(db.clone(), set.clone(), tributary_db.clone(), serai_key.clone())
.continually_run(confirm_dkg_task_def, vec![add_tributary_transactions_task]),
);
// Spawn the sign slash report task
let (sign_slash_report_task_def, sign_slash_report_task) = Task::new();
tokio::spawn(
(SignSlashReportTask {
db: db.clone(),
tributary_db,
tributary: tributary.clone(),
set: set.clone(),
key: serai_key,
})
.continually_run(sign_slash_report_task_def, vec![]),
);
// Whenever a new block occurs, immediately run the scan task
// This function also preserves the passed-in task handles until the Tributary is retired,
// ensuring they aren't dropped prematurely while also ensuring the tasks don't run ad
// infinitum
tokio::spawn(scan_on_new_block(
db,
set.set,
tributary,
scan_tributary_task,
vec![provide_cosign_cosigned_transactions_task, confirm_dkg_task, sign_slash_report_task],
));
}

View File

@@ -1,6 +0,0 @@
mod transaction;
pub use transaction::Transaction;
mod db;
mod scan;

View File

@@ -1,408 +0,0 @@
use core::future::Future;
use std::collections::HashMap;
use ciphersuite::group::GroupEncoding;
use serai_client::{
primitives::SeraiAddress,
validator_sets::primitives::{ValidatorSet, Slash},
};
use tributary::{
Signed as TributarySigned, TransactionKind, TransactionTrait,
Transaction as TributaryTransaction, Block, TributaryReader,
tendermint::{
tx::{TendermintTx, Evidence, decode_signed_message},
TendermintNetwork,
},
};
use serai_db::*;
use serai_task::ContinuallyRan;
use messages::sign::VariantSignId;
use crate::tributary::{
db::*,
transaction::{SigningProtocolRound, Signed, Transaction},
};
struct ScanBlock<'a, D: DbTxn, TD: Db> {
txn: &'a mut D,
set: ValidatorSet,
validators: &'a [SeraiAddress],
total_weight: u64,
validator_weights: &'a HashMap<SeraiAddress, u64>,
tributary: &'a TributaryReader<TD, Transaction>,
}
impl<'a, D: DbTxn, TD: Db> ScanBlock<'a, D, TD> {
fn potentially_start_cosign(&mut self) {
// Don't start a new cosigning instance if we're actively running one
if TributaryDb::actively_cosigning(self.txn, self.set) {
return;
}
// Start cosigning the latest intended-to-be-cosigned block
let Some(latest_substrate_block_to_cosign) =
TributaryDb::latest_substrate_block_to_cosign(self.txn, self.set)
else {
return;
};
let substrate_block_number = todo!("TODO");
// Mark us as actively cosigning
TributaryDb::start_cosigning(self.txn, self.set, substrate_block_number);
// Send the message for the processor to start signing
TributaryDb::send_message(
self.txn,
self.set,
messages::coordinator::CoordinatorMessage::CosignSubstrateBlock {
session: self.set.session,
block_number: substrate_block_number,
block: latest_substrate_block_to_cosign,
},
);
}
fn handle_application_tx(&mut self, block_number: u64, tx: Transaction) {
let signer = |signed: Signed| SeraiAddress(signed.signer.to_bytes());
if let TransactionKind::Signed(_, TributarySigned { signer, .. }) = tx.kind() {
// Don't handle transactions from those fatally slashed
// TODO: The fact they can publish these TXs makes this a notable spam vector
if TributaryDb::is_fatally_slashed(self.txn, self.set, SeraiAddress(signer.to_bytes())) {
return;
}
}
match tx {
// Accumulate this vote and fatally slash the participant if past the threshold
Transaction::RemoveParticipant { participant, signed } => {
let signer = signer(signed);
// Check the participant voted to be removed actually exists
if !self.validators.iter().any(|validator| *validator == participant) {
TributaryDb::fatal_slash(
self.txn,
self.set,
signer,
"voted to remove non-existent participant",
);
return;
}
match TributaryDb::accumulate(
self.txn,
self.set,
self.validators,
self.total_weight,
block_number,
Topic::RemoveParticipant { participant },
signer,
self.validator_weights[&signer],
&(),
) {
DataSet::None => {}
DataSet::Participating(_) => {
TributaryDb::fatal_slash(self.txn, self.set, participant, "voted to remove");
}
};
}
// Send the participation to the processor
Transaction::DkgParticipation { participation, signed } => {
TributaryDb::send_message(
self.txn,
self.set,
messages::key_gen::CoordinatorMessage::Participation {
session: self.set.session,
participant: todo!("TODO"),
participation,
},
);
}
Transaction::DkgConfirmationPreprocess { attempt, preprocess, signed } => {
// Accumulate the preprocesses into our own FROST attempt manager
todo!("TODO")
}
Transaction::DkgConfirmationShare { attempt, share, signed } => {
// Accumulate the shares into our own FROST attempt manager
todo!("TODO")
}
Transaction::Cosign { substrate_block_hash } => {
// Update the latest intended-to-be-cosigned Substrate block
TributaryDb::set_latest_substrate_block_to_cosign(self.txn, self.set, substrate_block_hash);
// Start a new cosign if we weren't already working on one
self.potentially_start_cosign();
}
Transaction::Cosigned { substrate_block_hash } => {
TributaryDb::finish_cosigning(self.txn, self.set);
// Fetch the latest intended-to-be-cosigned block
let Some(latest_substrate_block_to_cosign) =
TributaryDb::latest_substrate_block_to_cosign(self.txn, self.set)
else {
return;
};
// If this is the block we just cosigned, return, preventing us from signing it again
if latest_substrate_block_to_cosign == substrate_block_hash {
return;
}
// Since we do have a new cosign to work on, start it
self.potentially_start_cosign();
}
Transaction::SubstrateBlock { hash } => {
// Whitelist all of the IDs this Substrate block causes to be signed
todo!("TODO")
}
Transaction::Batch { hash } => {
// Whitelist the signing of this batch, publishing our own preprocess
todo!("TODO")
}
Transaction::SlashReport { slash_points, signed } => {
let signer = signer(signed);
if slash_points.len() != self.validators.len() {
TributaryDb::fatal_slash(
self.txn,
self.set,
signer,
"slash report was for a distinct amount of signers",
);
return;
}
// Accumulate, and if past the threshold, calculate *the* slash report and start signing it
match TributaryDb::accumulate(
self.txn,
self.set,
self.validators,
self.total_weight,
block_number,
Topic::SlashReport,
signer,
self.validator_weights[&signer],
&slash_points,
) {
DataSet::None => {}
DataSet::Participating(data_set) => {
// Find the median reported slashes for this validator
// TODO: This lets 34% perform a fatal slash. Should that be allowed?
let mut median_slash_report = Vec::with_capacity(self.validators.len());
for i in 0 .. self.validators.len() {
let mut this_validator =
data_set.values().map(|report| report[i]).collect::<Vec<_>>();
this_validator.sort_unstable();
// Choose the median, where if there are two median values, the lower one is chosen
let median_index = if (this_validator.len() % 2) == 1 {
this_validator.len() / 2
} else {
(this_validator.len() / 2) - 1
};
median_slash_report.push(this_validator[median_index]);
}
// We only publish slashes for the `f` worst performers to:
// 1) Effect amnesty if there were network disruptions which affected everyone
// 2) Ensure the signing threshold doesn't have a disincentive to do their job
// Find the worst performer within the signing threshold's slash points
let f = (self.validators.len() - 1) / 3;
let worst_validator_in_supermajority_slash_points = {
let mut sorted_slash_points = median_slash_report.clone();
sorted_slash_points.sort_unstable();
// This won't be a valid index if `f == 0`, which means we don't have any validators
// to slash
let index_of_first_validator_to_slash = self.validators.len() - f;
let index_of_worst_validator_in_supermajority = index_of_first_validator_to_slash - 1;
sorted_slash_points[index_of_worst_validator_in_supermajority]
};
// Perform the amortization
for slash_points in &mut median_slash_report {
*slash_points =
slash_points.saturating_sub(worst_validator_in_supermajority_slash_points)
}
let amortized_slash_report = median_slash_report;
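// Worked example (illustrative): with 4 validators and median reports
// [0, 3, 5, 10], f = (4 - 1) / 3 = 1. Sorted, the worst validator within the
// supermajority has 5 points, so every report is reduced by 5, yielding
// [0, 0, 0, 5]: only the single worst performer retains a slash.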
// Create the resulting slash report
let mut slash_report = vec![];
for (validator, points) in self.validators.iter().copied().zip(amortized_slash_report) {
if points != 0 {
slash_report.push(Slash { key: validator.into(), points });
}
}
assert!(slash_report.len() <= f);
// Recognize the topic for signing the slash report
TributaryDb::recognize_topic(
self.txn,
self.set,
Topic::Sign {
id: VariantSignId::SlashReport,
attempt: 0,
round: SigningProtocolRound::Preprocess,
},
);
// Send the message for the processor to start signing
TributaryDb::send_message(
self.txn,
self.set,
messages::coordinator::CoordinatorMessage::SignSlashReport {
session: self.set.session,
report: slash_report,
},
);
}
};
}
Transaction::Sign { id, attempt, round, data, signed } => {
let topic = Topic::Sign { id, attempt, round };
let signer = signer(signed);
if u64::try_from(data.len()).unwrap() != self.validator_weights[&signer] {
TributaryDb::fatal_slash(
self.txn,
self.set,
signer,
"signer signed with a distinct amount of key shares than they had key shares",
);
return;
}
match TributaryDb::accumulate(
self.txn,
self.set,
self.validators,
self.total_weight,
block_number,
topic,
signer,
self.validator_weights[&signer],
&data,
) {
DataSet::None => {}
DataSet::Participating(data_set) => {
let id = topic.sign_id(self.set).expect("Topic::Sign didn't have SignId");
let flatten_data_set = |data_set| todo!("TODO");
let data_set = flatten_data_set(data_set);
TributaryDb::send_message(
self.txn,
self.set,
match round {
SigningProtocolRound::Preprocess => {
messages::sign::CoordinatorMessage::Preprocesses { id, preprocesses: data_set }
}
SigningProtocolRound::Share => {
messages::sign::CoordinatorMessage::Shares { id, shares: data_set }
}
},
)
}
};
}
}
}
fn handle_block(mut self, block_number: u64, block: Block<Transaction>) {
TributaryDb::start_of_block(self.txn, self.set, block_number);
for tx in block.transactions {
match tx {
TributaryTransaction::Tendermint(TendermintTx::SlashEvidence(ev)) => {
// Since the evidence is on the chain, it will have already been validated
// We can just punish the signer
let data = match ev {
Evidence::ConflictingMessages(first, second) => (first, Some(second)),
Evidence::InvalidPrecommit(first) | Evidence::InvalidValidRound(first) => (first, None),
};
/* TODO
let msgs = (
decode_signed_message::<TendermintNetwork<D, Transaction, P>>(&data.0).unwrap(),
if data.1.is_some() {
Some(
decode_signed_message::<TendermintNetwork<D, Transaction, P>>(&data.1.unwrap())
.unwrap(),
)
} else {
None
},
);
// Since anything with evidence is fundamentally faulty behavior, not just temporal
// errors, mark the node as fatally slashed
TributaryDb::fatal_slash(
self.txn, msgs.0.msg.sender, &format!("invalid tendermint messages: {msgs:?}"));
*/
todo!("TODO")
}
TributaryTransaction::Application(tx) => {
self.handle_application_tx(block_number, tx);
}
}
}
}
}
struct ScanTributaryTask<D: Db, TD: Db> {
db: D,
set: ValidatorSet,
validators: Vec<SeraiAddress>,
total_weight: u64,
validator_weights: HashMap<SeraiAddress, u64>,
tributary: TributaryReader<TD, Transaction>,
}
impl<D: Db, TD: Db> ContinuallyRan for ScanTributaryTask<D, TD> {
fn run_iteration(&mut self) -> impl Send + Future<Output = Result<bool, String>> {
async move {
let (mut last_block_number, mut last_block_hash) =
TributaryDb::last_handled_tributary_block(&self.db, self.set)
.unwrap_or((0, self.tributary.genesis()));
let mut made_progress = false;
while let Some(next) = self.tributary.block_after(&last_block_hash) {
let block = self.tributary.block(&next).unwrap();
let block_number = last_block_number + 1;
let block_hash = block.hash();
// Make sure we have all of the provided transactions for this block
for tx in &block.transactions {
let TransactionKind::Provided(order) = tx.kind() else {
continue;
};
// make sure we have all the provided txs in this block locally
if !self.tributary.locally_provided_txs_in_block(&block_hash, order) {
return Err(format!(
"didn't have the provided Transactions on-chain for set (ephemeral error): {:?}",
self.set
));
}
}
let mut txn = self.db.txn();
(ScanBlock {
txn: &mut txn,
set: self.set,
validators: &self.validators,
total_weight: self.total_weight,
validator_weights: &self.validator_weights,
tributary: &self.tributary,
})
.handle_block(block_number, block);
TributaryDb::set_last_handled_tributary_block(&mut txn, self.set, block_number, block_hash);
last_block_number = block_number;
last_block_hash = block_hash;
txn.commit();
made_progress = true;
}
Ok(made_progress)
}
}
}

View File

@@ -1,338 +0,0 @@
use core::{ops::Deref, fmt::Debug};
use std::io;
use zeroize::Zeroizing;
use rand_core::{RngCore, CryptoRng};
use blake2::{digest::typenum::U32, Digest, Blake2b};
use ciphersuite::{
group::{ff::Field, GroupEncoding},
Ciphersuite, Ristretto,
};
use schnorr::SchnorrSignature;
use scale::Encode;
use borsh::{BorshSerialize, BorshDeserialize};
use serai_client::{primitives::SeraiAddress, validator_sets::primitives::MAX_KEY_SHARES_PER_SET};
use messages::sign::VariantSignId;
use tributary::{
ReadWrite,
transaction::{
Signed as TributarySigned, TransactionError, TransactionKind, Transaction as TransactionTrait,
},
};
/// The round this data is for, within a signing protocol.
#[derive(Clone, Copy, PartialEq, Eq, Debug, Encode, BorshSerialize, BorshDeserialize)]
pub enum SigningProtocolRound {
/// A preprocess.
Preprocess,
/// A signature share.
Share,
}
impl SigningProtocolRound {
fn nonce(&self) -> u32 {
match self {
SigningProtocolRound::Preprocess => 0,
SigningProtocolRound::Share => 1,
}
}
}
/// `tributary::Signed` but without the nonce.
///
/// All of our nonces are deterministic to the type of transaction and fields within.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
pub struct Signed {
/// The signer.
pub signer: <Ristretto as Ciphersuite>::G,
/// The signature.
pub signature: SchnorrSignature<Ristretto>,
}
impl BorshSerialize for Signed {
fn serialize<W: io::Write>(&self, writer: &mut W) -> Result<(), io::Error> {
writer.write_all(self.signer.to_bytes().as_ref())?;
self.signature.write(writer)
}
}
impl BorshDeserialize for Signed {
fn deserialize_reader<R: io::Read>(reader: &mut R) -> Result<Self, io::Error> {
let signer = Ristretto::read_G(reader)?;
let signature = SchnorrSignature::read(reader)?;
Ok(Self { signer, signature })
}
}
impl Signed {
/// Provide a nonce to convert a `Signed` into a `tributary::Signed`.
fn nonce(&self, nonce: u32) -> TributarySigned {
TributarySigned { signer: self.signer, nonce, signature: self.signature }
}
}
/// The Tributary transaction definition used by Serai
#[derive(Clone, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)]
pub enum Transaction {
/// A vote to remove a participant for invalid behavior
RemoveParticipant {
/// The participant to remove
participant: SeraiAddress,
/// The transaction's signer and signature
signed: Signed,
},
/// A participation in the DKG
DkgParticipation {
participation: Vec<u8>,
/// The transaction's signer and signature
signed: Signed,
},
/// The preprocess to confirm the DKG results on-chain
DkgConfirmationPreprocess {
/// The attempt number of this signing protocol
attempt: u32,
/// The preprocess
preprocess: [u8; 64],
/// The transaction's signer and signature
signed: Signed,
},
/// The signature share to confirm the DKG results on-chain
DkgConfirmationShare {
/// The attempt number of this signing protocol
attempt: u32,
/// The signature share
share: [u8; 32],
/// The transaction's signer and signature
signed: Signed,
},
/// Intend to co-sign a finalized Substrate block
///
/// When the time comes to start a new cosigning protocol, the most recent Substrate block will
/// be the one selected to be cosigned.
Cosign {
/// The hash of the Substrate block to sign
substrate_block_hash: [u8; 32],
},
/// The cosign for a Substrate block
///
/// After producing this cosign, we need to start work on the latest intended-to-be cosigned
/// block. That requires agreement on when this cosign was produced, which we solve by embedding
/// this cosign on chain.
///
/// Ideally, we wouldn't have this transaction at all. The coordinator, without access to any
/// of the key shares, could observe the FROST signing session and detect its successful
/// completion.
/// Unfortunately, that functionality is not present in modular-frost, so we do need to support
/// *some* asynchronous flow (where the processor or P2P network informs us of the successful
/// completion).
///
/// If we use a `Provided` transaction, that requires everyone observe this cosign.
///
/// If we use an `Unsigned` transaction, we can't verify the cosign signature inside
/// `Transaction::verify` unless we embedded the full `SignedCosign` on-chain. The issue is since
/// a Tributary is stateless with regards to the on-chain logic, including `Transaction::verify`,
/// we can't verify the signature against the group's public key unless we also include that (but
/// then we open a DoS where arbitrary group keys are specified to cause inclusion of arbitrary
/// blobs on chain).
///
/// If we use a `Signed` transaction, we mitigate the DoS risk by having someone to fatally
/// slash. The performance is horrible though, as with 100 validators, all 100 will publish
/// this transaction.
///
/// We could use a signed `Unsigned` transaction, where it includes a signer and signature but
/// isn't technically a Signed transaction. This lets us de-duplicate the transaction premised on
/// its contents.
///
/// The optimal choice is likely to use a `Provided` transaction. We don't actually need to
/// observe the produced cosign (which is ephemeral). As long as it's agreed the cosign in
/// question no longer needs to be produced, which would mean the cosigning protocol at-large
/// has cosigned the block in question, it'd be safe to provide this and move on to the next
/// cosign.
Cosigned { substrate_block_hash: [u8; 32] },
/// Acknowledge a Substrate block
///
/// This is provided after the block has been cosigned.
///
/// With the acknowledgement of a Substrate block, we can whitelist all the `VariantSignId`s
/// resulting from its handling.
SubstrateBlock {
/// The hash of the Substrate block
hash: [u8; 32],
},
/// Acknowledge a Batch
///
/// Once everyone has acknowledged the Batch, we can begin signing it.
Batch {
/// The hash of the Batch's serialization.
///
/// Generally, we refer to a Batch by its ID/the hash of its instructions. Here, we want to
/// ensure consensus on the Batch, and achieving consensus on its hash is the most effective
/// way to do that.
hash: [u8; 32],
},
/// Data from a signing protocol.
Sign {
/// The ID of the object being signed
id: VariantSignId,
/// The attempt number of this signing protocol
attempt: u32,
/// The round this data is for, within the signing protocol
round: SigningProtocolRound,
/// The data itself
///
/// There will be `n` blobs of data, where `n` is the number of key shares held by the
/// validator sending this transaction.
data: Vec<Vec<u8>>,
/// The transaction's signer and signature
signed: Signed,
},
/// The local view of slashes observed by the transaction's sender
SlashReport {
/// The slash points accrued by each validator
slash_points: Vec<u32>,
/// The transaction's signer and signature
signed: Signed,
},
}
impl ReadWrite for Transaction {
fn read<R: io::Read>(reader: &mut R) -> io::Result<Self> {
borsh::from_reader(reader)
}
fn write<W: io::Write>(&self, writer: &mut W) -> io::Result<()> {
borsh::to_writer(writer, self)
}
}
impl TransactionTrait for Transaction {
fn kind(&self) -> TransactionKind {
match self {
Transaction::RemoveParticipant { participant, signed } => {
TransactionKind::Signed((b"RemoveParticipant", participant).encode(), signed.nonce(0))
}
Transaction::DkgParticipation { signed, .. } => {
TransactionKind::Signed(b"DkgParticipation".encode(), signed.nonce(0))
}
Transaction::DkgConfirmationPreprocess { attempt, signed, .. } => {
TransactionKind::Signed((b"DkgConfirmation", attempt).encode(), signed.nonce(0))
}
Transaction::DkgConfirmationShare { attempt, signed, .. } => {
TransactionKind::Signed((b"DkgConfirmation", attempt).encode(), signed.nonce(1))
}
Transaction::Cosign { .. } => TransactionKind::Provided("CosignSubstrateBlock"),
Transaction::Cosigned { .. } => TransactionKind::Provided("Cosigned"),
Transaction::SubstrateBlock { .. } => TransactionKind::Provided("SubstrateBlock"),
Transaction::Batch { .. } => TransactionKind::Provided("Batch"),
Transaction::Sign { id, attempt, round, signed, .. } => {
TransactionKind::Signed((b"Sign", id, attempt).encode(), signed.nonce(round.nonce()))
}
Transaction::SlashReport { signed, .. } => {
TransactionKind::Signed(b"SlashReport".encode(), signed.nonce(0))
}
}
}
fn hash(&self) -> [u8; 32] {
let mut tx = ReadWrite::serialize(self);
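// For signed transactions, the trailing signature is stripped so the hash
// commits to the transaction's content, not its signature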
if let TransactionKind::Signed(_, signed) = self.kind() {
// Make sure the part we're cutting off is the signature
assert_eq!(tx.drain((tx.len() - 64) ..).collect::<Vec<_>>(), signed.signature.serialize());
}
Blake2b::<U32>::digest(&tx).into()
}
// This is a stateless verification which we use to enforce some size limits.
fn verify(&self) -> Result<(), TransactionError> {
#[allow(clippy::match_same_arms)]
match self {
// Fixed-length TX
Transaction::RemoveParticipant { .. } => {}
// TODO: MAX_DKG_PARTICIPATION_LEN
Transaction::DkgParticipation { .. } => {}
// These are fixed-length TXs
Transaction::DkgConfirmationPreprocess { .. } | Transaction::DkgConfirmationShare { .. } => {}
// Provided TXs
Transaction::Cosign { .. } |
Transaction::Cosigned { .. } |
Transaction::SubstrateBlock { .. } |
Transaction::Batch { .. } => {}
Transaction::Sign { data, .. } => {
if data.len() > usize::try_from(MAX_KEY_SHARES_PER_SET).unwrap() {
Err(TransactionError::InvalidContent)?
}
// TODO: MAX_SIGN_LEN
}
Transaction::SlashReport { slash_points, .. } => {
if slash_points.len() > usize::try_from(MAX_KEY_SHARES_PER_SET).unwrap() {
Err(TransactionError::InvalidContent)?
}
}
};
Ok(())
}
}
impl Transaction {
/// Sign a transaction.
///
/// Panics if signing a transaction type which isn't `TransactionKind::Signed`.
pub fn sign<R: RngCore + CryptoRng>(
&mut self,
rng: &mut R,
genesis: [u8; 32],
key: &Zeroizing<<Ristretto as Ciphersuite>::F>,
) {
fn signed(tx: &mut Transaction) -> &mut Signed {
#[allow(clippy::match_same_arms)] // Merging these identical arms wouldn't make semantic sense here
match tx {
Transaction::RemoveParticipant { ref mut signed, .. } |
Transaction::DkgParticipation { ref mut signed, .. } |
Transaction::DkgConfirmationPreprocess { ref mut signed, .. } => signed,
Transaction::DkgConfirmationShare { ref mut signed, .. } => signed,
Transaction::Cosign { .. } => panic!("signing CosignSubstrateBlock"),
Transaction::Cosigned { .. } => panic!("signing Cosigned"),
Transaction::SubstrateBlock { .. } => panic!("signing SubstrateBlock"),
Transaction::Batch { .. } => panic!("signing Batch"),
Transaction::Sign { ref mut signed, .. } => signed,
Transaction::SlashReport { ref mut signed, .. } => signed,
}
}
// Decide the nonce to sign with
let sig_nonce = Zeroizing::new(<Ristretto as Ciphersuite>::F::random(rng));
{
// Set the signer and the nonce
let signed = signed(self);
signed.signer = Ristretto::generator() * key.deref();
signed.signature.R = <Ristretto as Ciphersuite>::generator() * sig_nonce.deref();
}
// Get the signature hash (which now includes `R || A` making it valid as the challenge)
let sig_hash = self.sig_hash(genesis);
// Sign the signature
signed(self).signature = SchnorrSignature::<Ristretto>::sign(key, sig_nonce, sig_hash);
}
}

View File

@@ -8,7 +8,7 @@ authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = []
edition = "2021"
publish = false
rust-version = "1.81"
rust-version = "1.85"
[package.metadata.docs.rs]
all-features = true
@@ -18,14 +18,17 @@ rustdoc-args = ["--cfg", "docsrs"]
workspace = true
[dependencies]
scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["std", "derive"] }
bitvec = { version = "1", default-features = false, features = ["std"] }
borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] }
serai-client = { path = "../../substrate/client", version = "0.1", default-features = false, features = ["serai", "borsh"] }
dkg = { path = "../../crypto/dkg", default-features = false, features = ["std"] }
serai-client-serai = { path = "../../substrate/client/serai", default-features = false }
log = { version = "0.4", default-features = false, features = ["std"] }
futures = { version = "0.3", default-features = false, features = ["std"] }
tokio = { version = "1", default-features = false }
serai-db = { path = "../../common/db", version = "0.1.1" }
serai-task = { path = "../../common/task", version = "0.1" }

View File

@@ -1,6 +1,6 @@
AGPL-3.0-only license
Copyright (c) 2023-2024 Luke Parker
Copyright (c) 2023-2025 Luke Parker
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License Version 3 as

View File

@@ -1,6 +1,6 @@
# Serai Coordinate Substrate Scanner
# Serai Coordinator Substrate
This is the scanner of the Serai blockchain for the purposes of Serai's coordinator.
This crate manages the Serai coordinator's interactions with Serai's Substrate blockchain.
Two event streams are defined:
@@ -12,3 +12,9 @@ Two event streams are defined:
The canonical event stream is available without provision of a validator's public key. The ephemeral
event stream requires provision of a validator's public key. Both are ordered within themselves, yet
there are no ordering guarantees across the two.
Additionally, a collection of tasks is defined to publish data onto Serai:
- `SetKeysTask`, which sets the keys generated via DKGs onto Serai.
- `PublishBatchTask`, which publishes `Batch`s onto Serai.
- `PublishSlashReportTask`, which publishes `SlashReport`s onto Serai.
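As a rough sketch of the event streams' usage (mirroring the coordinator's own wiring; the `db`, `serai`, and `validator_address` handles are assumed context, and this is not a stable API):
let (canonical_task_def, canonical_task) = Task::new();
tokio::spawn(
  CanonicalEventStream::new(db.clone(), serai.clone())
    .continually_run(canonical_task_def, vec![]),
);
// The ephemeral event stream additionally takes the validator's public key
let (ephemeral_task_def, ephemeral_task) = Task::new();
tokio::spawn(
  EphemeralEventStream::new(db.clone(), serai.clone(), validator_address)
    .continually_run(ephemeral_task_def, vec![]),
);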

View File

@@ -1,8 +1,15 @@
use std::future::Future;
use core::future::Future;
use std::sync::Arc;
use futures::stream::{StreamExt, FuturesOrdered};
use serai_client::Serai;
use serai_client_serai::{
abi::{
self,
primitives::{network_id::ExternalNetworkId, validator_sets::ExternalValidatorSet},
},
Serai,
};
use messages::substrate::{InInstructionResult, ExecutedBatch, CoordinatorMessage};
@@ -14,26 +21,29 @@ use serai_cosign::Cosigning;
create_db!(
CoordinatorSubstrateCanonical {
NextBlock: () -> u64,
LastIndexedBatchId: (network: ExternalNetworkId) -> u32,
}
);
/// The event stream for canonical events.
pub struct CanonicalEventStream<D: Db> {
db: D,
serai: Serai,
serai: Arc<Serai>,
}
impl<D: Db> CanonicalEventStream<D> {
/// Create a new canonical event stream.
///
/// Only one of these may exist over the provided database.
pub fn new(db: D, serai: Serai) -> Self {
pub fn new(db: D, serai: Arc<Serai>) -> Self {
Self { db, serai }
}
}
impl<D: Db> ContinuallyRan for CanonicalEventStream<D> {
fn run_iteration(&mut self) -> impl Send + Future<Output = Result<bool, String>> {
type Error = String;
fn run_iteration(&mut self) -> impl Send + Future<Output = Result<bool, Self::Error>> {
async move {
let next_block = NextBlock::get(&self.db).unwrap_or(0);
let latest_finalized_block =
@@ -42,10 +52,10 @@ impl<D: Db> ContinuallyRan for CanonicalEventStream<D> {
// These are all the events which generate canonical messages
struct CanonicalEvents {
time: u64,
key_gen_events: Vec<serai_client::validator_sets::ValidatorSetsEvent>,
set_retired_events: Vec<serai_client::validator_sets::ValidatorSetsEvent>,
batch_events: Vec<serai_client::in_instructions::InInstructionsEvent>,
burn_events: Vec<serai_client::coins::CoinsEvent>,
set_keys_events: Vec<abi::validator_sets::Event>,
slash_report_events: Vec<abi::validator_sets::Event>,
batch_events: Vec<abi::in_instructions::Event>,
burn_events: Vec<abi::coins::Event>,
}
// For a cosigned block, fetch all relevant events
@@ -63,40 +73,24 @@ impl<D: Db> ContinuallyRan for CanonicalEventStream<D> {
}
Err(serai_cosign::Faulted) => return Err("cosigning process faulted".to_string()),
};
let temporal_serai = serai.as_of(block_hash);
let temporal_serai_validators = temporal_serai.validator_sets();
let temporal_serai_instructions = temporal_serai.in_instructions();
let temporal_serai_coins = temporal_serai.coins();
let (block, key_gen_events, set_retired_events, batch_events, burn_events) =
tokio::try_join!(
serai.block(block_hash),
temporal_serai_validators.key_gen_events(),
temporal_serai_validators.set_retired_events(),
temporal_serai_instructions.batch_events(),
temporal_serai_coins.burn_with_instruction_events(),
)
.map_err(|e| format!("{e:?}"))?;
let Some(block) = block else {
let events = serai.events(block_hash).await.map_err(|e| format!("{e}"))?;
let set_keys_events = events.validator_sets().set_keys_events().cloned().collect();
let slash_report_events =
events.validator_sets().slash_report_events().cloned().collect();
let batch_events = events.in_instructions().batch_events().cloned().collect();
let burn_events = events.coins().burn_with_instruction_events().cloned().collect();
let Some(block) = serai.block(block_hash).await.map_err(|e| format!("{e:?}"))? else {
Err(format!("Serai node didn't have cosigned block #{block_number}"))?
};
let time = if block_number == 0 {
block.time().unwrap_or(0)
} else {
// Serai's block time is in milliseconds
block
.time()
.ok_or_else(|| "non-genesis Serai block didn't have a time".to_string())? /
1000
};
// We use time in seconds, not milliseconds, here
let time = block.header.unix_time_in_millis() / 1000;
Ok((
block_number,
CanonicalEvents {
time,
key_gen_events,
set_retired_events,
set_keys_events,
slash_report_events,
batch_events,
burn_events,
},
@@ -128,10 +122,9 @@ impl<D: Db> ContinuallyRan for CanonicalEventStream<D> {
let mut txn = self.db.txn();
for key_gen in block.key_gen_events {
let serai_client::validator_sets::ValidatorSetsEvent::KeyGen { set, key_pair } = &key_gen
else {
panic!("KeyGen event wasn't a KeyGen event: {key_gen:?}");
for set_keys in block.set_keys_events {
let abi::validator_sets::Event::SetKeys { set, key_pair } = &set_keys else {
panic!("`SetKeys` event wasn't a `SetKeys` event: {set_keys:?}");
};
crate::Canonical::send(
&mut txn,
@@ -144,10 +137,9 @@ impl<D: Db> ContinuallyRan for CanonicalEventStream<D> {
);
}
for set_retired in block.set_retired_events {
let serai_client::validator_sets::ValidatorSetsEvent::SetRetired { set } = &set_retired
else {
panic!("SetRetired event wasn't a SetRetired event: {set_retired:?}");
for slash_report in block.slash_report_events {
let abi::validator_sets::Event::SlashReport { set } = &slash_report else {
panic!("`SlashReport` event wasn't a `SlashReport` event: {slash_report:?}");
};
crate::Canonical::send(
&mut txn,
@@ -156,10 +148,12 @@ impl<D: Db> ContinuallyRan for CanonicalEventStream<D> {
);
}
for network in serai_client::primitives::NETWORKS {
for network in ExternalNetworkId::all() {
let mut batch = None;
for this_batch in &block.batch_events {
let serai_client::in_instructions::InInstructionsEvent::Batch {
// Only irrefutable as this is the only member of the enum at this time
#[expect(irrefutable_let_patterns)]
let abi::in_instructions::Event::Batch {
network: batch_network,
publishing_session,
id,
@@ -177,7 +171,7 @@ impl<D: Db> ContinuallyRan for CanonicalEventStream<D> {
batch = Some(ExecutedBatch {
id: *id,
publisher: *publishing_session,
external_network_block_hash: *external_network_block_hash,
external_network_block_hash: external_network_block_hash.0,
in_instructions_hash: *in_instructions_hash,
in_instruction_results: in_instruction_results
.iter()
@@ -190,15 +184,20 @@ impl<D: Db> ContinuallyRan for CanonicalEventStream<D> {
})
.collect(),
});
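// Enforce batch-ID contiguity: for the first batch (`id == 0`), `checked_sub`
// yields `None`, matching the absence of a `LastIndexedBatchId` entry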
if LastIndexedBatchId::get(&txn, network) != id.checked_sub(1) {
panic!(
"next batch from Serai's ID was not an increment of the last indexed batch's ID"
);
}
LastIndexedBatchId::set(&mut txn, network, id);
}
}
let mut burns = vec![];
for burn in &block.burn_events {
let serai_client::coins::CoinsEvent::BurnWithInstruction { from: _, instruction } =
&burn
else {
panic!("Burn event wasn't a Burn.in event: {burn:?}");
let abi::coins::Event::BurnWithInstruction { from: _, instruction } = &burn else {
panic!("BurnWithInstruction event wasn't a BurnWithInstruction event: {burn:?}");
};
if instruction.balance.coin.network() == network {
burns.push(instruction.clone());
@@ -219,3 +218,7 @@ impl<D: Db> ContinuallyRan for CanonicalEventStream<D> {
}
}
}
pub(crate) fn last_indexed_batch_id(txn: &impl DbTxn, network: ExternalNetworkId) -> Option<u32> {
LastIndexedBatchId::get(txn, network)
}

Some files were not shown because too many files have changed in this diff.