149 Commits

Author SHA1 Message Date
Luke Parker
9a75f92864 Thoroughly update versions and methodology
For hash-pinned dependencies, adds comments documenting the associated
versions.

Adds a pin to `slither-analyzer`, which was previously missing.

Updates to Monero 0.18.4.4.

`mimalloc` now has the correct option set when building for `musl`. A C++
compiler is no longer required in its Docker image.

The runtime's `Dockerfile` now symlinks a `libc.so` already present on the
image instead of creating one itself. It also builds the runtime within the
image, ensuring that only happens once. The test ensuring the methodology is
reproducible has been updated accordingly: rather than simply creating
containers from the image, it now rebuilds the image entirely. This is also
more robust and arguably should have been done already.

The pin to the exact hash of the `patch-polkadot-sdk` repo in every
`Cargo.toml` has been removed. The lockfile already serves that role,
simplifying updating in the future.

The latest Rust nightly is adopted as well (superseding
https://github.com/serai-dex/serai/pull/697).

The `librocksdb-sys` patch is replaced with a `kvdb-rocksdb` patch, removing a
git dependency, thanks to https://github.com/paritytech/parity-common/pull/950.
2025-12-01 18:17:01 -05:00
Luke Parker
30ea9d9a06 Tidy the DEX pallet 2025-11-30 21:42:27 -05:00
Luke Parker
c45c973ca1 Remove musl-dev from runtime/Dockerfile
It wasn't pinned with a hash, only with a version tag. This ensures we are
deterministic to the image (specified by hash), `Cargo.lock`, and source code
alone.

Unfortunately, this was incredibly annoying to do; the exact process uncovered
a SIGSEGV in stable Rust. The extensive documentation details the solution.
Thankfully, it works now.
2025-11-27 03:37:37 -05:00
Luke Parker
6e37ac030d Add patch pointing alloy-eip2124 to an empty crate
Removes the `crc` dependency, which had a unique author associated with it.
2025-11-26 17:01:03 -05:00
Luke Parker
e7c759c468 Improve substrate-median tests
The use of a dedicated test module ensures the API doesn't hide anything which
needs to be public. There are also now explicit tests for when the median is the
popped value.
2025-11-25 23:46:12 -05:00
Luke Parker
8ec0582237 Add module to calculate medians 2025-11-25 22:39:52 -05:00
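A hedged sketch of what such a helper can look like (hypothetical names; not the actual `substrate-median` API):

```rust
/// Median of a sorted slice, taking the lower of the two middle values for
/// even lengths so no averaging (or rounding policy) is needed.
fn median<T: Copy + Ord>(sorted: &[T]) -> Option<T> {
  if sorted.is_empty() {
    return None;
  }
  debug_assert!(sorted.windows(2).all(|window| window[0] <= window[1]));
  Some(sorted[(sorted.len() - 1) / 2])
}

fn main() {
  assert_eq!(median(&[1u64, 2, 3]), Some(2));
  assert_eq!(median(&[1u64, 2, 3, 4]), Some(2)); // lower of the two middles
  assert_eq!(median::<u64>(&[]), None);
}
```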
Luke Parker
8d8e8a7a77 Remove unnecessary MSRVs from patches/ 2025-11-25 17:05:30 -05:00
Luke Parker
028ec3cce0 borsh 1.6.0
Bumps the MSRV for some of our crates, which is fine.
2025-11-25 16:58:19 -05:00
Luke Parker
c49215805f Update Substrate 2025-11-25 00:06:54 -05:00
Luke Parker
2ffdd2a01d Update monero-oxide, Substrate 2025-11-22 11:49:25 -05:00
Luke Parker
e1e6e67d4a Ensure desired pruning behavior is held within the node 2025-11-18 21:46:58 -05:00
Luke Parker
6b19780c7b Remove historical state access from Serai
Resolves https://github.com/serai-dex/serai/issues/694.
2025-11-18 21:27:47 -05:00
Luke Parker
6100c3ca90 Restore patches/dalek-ff-group
Ensures `crypto/dalek-ff-group` is pure.
2025-11-16 19:04:57 -05:00
Luke Parker
fa0ed4b180 Add validator sets RPC functions necessary for the coordinator 2025-11-16 17:38:08 -05:00
Luke Parker
0ea16f9e01 doc_auto_cfg -> doc_cfg 2025-11-16 17:38:08 -05:00
Luke Parker
7a314baa9f Update all of serai-coordinator to compile with the new serai-client-serai 2025-11-16 17:38:03 -05:00
Luke Parker
9891ccade8 Add From<*::Call> for Call to serai-abi 2025-11-16 16:43:06 -05:00
Luke Parker
f1f166c168 Restore publish_transaction RPC to Serai 2025-11-16 16:43:06 -05:00
Luke Parker
df4aee2d59 Update serai-client to solely be an umbrella crate of the dedicated client libraries 2025-11-16 16:43:05 -05:00
Luke Parker
302a43653f Add helper to get TemporalSerai as of the latest finalized block 2025-11-16 16:42:36 -05:00
Luke Parker
d219b77bd0 Add bindings to the events from the coins module to serai-client-serai 2025-11-15 16:13:25 -05:00
Luke Parker
fce26eaee1 Update the block RPCs to return null when missing, not an error
Promotes clarity.
2025-11-15 16:12:39 -05:00
Luke Parker
3cfbd9add7 Update serai-coordinator-cosign to the new serai-client-serai 2025-11-15 16:10:58 -05:00
Luke Parker
609cf06393 Update SetDecided to include the validators
Necessary for light-client protocols to follow along with consensus. Arguably,
anyone handling GRANDPA's consensus could peek at this through the consensus
commit anyway, but we shouldn't rely on doing so, hence including the
validators here.
2025-11-15 14:59:33 -05:00
Luke Parker
46b1f1b7ec Add test for the integrity of headers 2025-11-14 12:04:21 -05:00
Luke Parker
09113201e7 Fixes to the validator sets RPC 2025-11-14 11:19:02 -05:00
Luke Parker
556d294157 Add pallet-timestamp
Ensures the timestamp is set, is within expected parameters, and is correct in
relation to `pallet-babe`.
2025-11-14 09:59:32 -05:00
Luke Parker
82ca889ed3 Wrap Proposer so we can add the SeraiPreExecutionDigest (timestamps) 2025-11-14 08:02:54 -05:00
Luke Parker
cde0f753c2 Correct Serai header on genesis
Includes a couple misc fixes for the RPC as well.
2025-11-14 07:22:59 -05:00
Luke Parker
6ff0ef7aa6 Type the errors yielded by serai-node's RPC 2025-11-14 03:52:35 -05:00
Luke Parker
f9e3d1b142 Expand validator sets API with the rest of the events and some getters
We could've added a storage API and fetched these fields that way, except we
want the storage to be opaque. That meant adding the RPC routes to the node,
which also simplifies fetching these fields for anyone else writing RPC code.
The node itself could've used the storage API, except a lot of the storage in
validator-sets is marked opaque, only to be read via functions, so extending
the runtime made the most sense.
2025-11-14 03:37:06 -05:00
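In other words, roughly the following flow (a sketch with hypothetical names, not the actual `serai-node` routes):

```rust
// Placeholder types standing in for the opaque primitives.
pub struct NetworkId(pub u8);
pub struct Session(pub u32);

// The runtime exposes read functions rather than its raw storage.
pub trait ValidatorSetsRuntimeApi {
  fn session(&self, network: NetworkId) -> Option<Session>;
}

// The node's RPC route is then a thin forward to the runtime function, so
// external consumers never depend on the storage layout.
pub fn rpc_session(
  runtime: &impl ValidatorSetsRuntimeApi,
  network: NetworkId,
) -> Option<Session> {
  runtime.session(network)
}
```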
Luke Parker
a793aa18ef Make ethereum-schnorr-contract no-std and no-alloc eligible 2025-11-13 05:48:18 -05:00
Luke Parker
5662beeb8a Patch from parity-bip39 back to bip39
Per https://github.com/michalkucharczyk/rust-bip39/tree/mku-2.0.1-release,
`parity-bip39` was a fork to publish a release of `bip39` with two specific PRs
merged. Not only have those PRs been merged, but `bip39` now accepts
`bitcoin_hashes 0.14` (https://github.com/rust-bitcoin/rust-bip39/pull/76),
making this a great time to reconcile (even though it does technically add a
git dependency until the new release is cut...).
2025-11-13 05:25:41 -05:00
Luke Parker
509bd58f4e Add method to fetch a block's events to the RPC 2025-11-13 05:13:55 -05:00
Luke Parker
367a5769e8 Update deny.toml 2025-11-13 00:37:51 -05:00
Luke Parker
cb6eb6430a Update version of substrate 2025-11-13 00:17:19 -05:00
Luke Parker
4f82e5912c Use Alpine to build the runtime
Smaller, works without issue.
2025-11-12 23:02:22 -05:00
Luke Parker
ac7af40f2e Remove rust-src as a component for WASM
It's unnecessary since `wasm32v1-none`.
2025-11-12 23:00:57 -05:00
Luke Parker
264bdd46ca Update serai-runtime to compile a minimum subset of itself for non-WASM targets
We only really care about it as a WASM blob, given `serai-abi`, so there's no
need to fully compile it twice when doing so is expensive and the native build
is otherwise unused.
2025-11-12 23:00:00 -05:00
Luke Parker
c52f7634de Patch librocksdb-sys to never enable jemalloc, which conflicts with mimalloc
Allows us to update mimalloc and enable the newly added guard pages.

Conflict identified by @PlasmaPower.
2025-11-11 23:04:05 -05:00
Luke Parker
21eaa5793d Error when not running the dev network if an explicit WASM runtime isn't provided 2025-11-11 23:02:20 -05:00
Luke Parker
c744a80d80 Check the finalized function doesn't claim unfinalized blocks are in fact finalized 2025-11-11 23:02:20 -05:00
Luke Parker
a34f9f6164 Build and run the message queue over Alpine
We previously stopped doing so for stability reasons, but this _should_ be
tried again.
2025-11-11 23:02:20 -05:00
Luke Parker
353683cfd2 revm 33 2025-11-11 23:02:16 -05:00
Luke Parker
d4f77159c4 Rust 1.91.1 due to the regression re: wasm builds 2025-11-11 09:08:30 -05:00
Luke Parker
191bf4bdea Remove std feature from revm
It's unnecessary and bloats the tree decently.
2025-11-10 06:34:33 -05:00
Luke Parker
06a4824aba Move bitcoin-serai to core-json and feature-gate the RPC functionality 2025-11-10 05:31:13 -05:00
Luke Parker
e65a37e639 Update various versions 2025-11-10 04:02:02 -05:00
Luke Parker
4653ef4a61 Use dockertest for the newly added serai-client-serai test 2025-11-07 02:08:02 -05:00
Luke Parker
ce08fad931 Add initial basic tests for serai-client-serai 2025-11-06 20:12:37 -05:00
Luke Parker
1866bb7ae3 Begin work on the new RPC for the new node 2025-11-06 03:08:43 -05:00
Luke Parker
aff2065c31 Polkadot stable2509-1 2025-11-06 00:23:35 -05:00
Luke Parker
7300700108 Update misc versions 2025-11-05 19:11:33 -05:00
Luke Parker
31874ceeae serai-node which compiles and produces/finalizes blocks with --dev 2025-11-05 18:20:23 -05:00
Luke Parker
012b8fddae Get serai-node to compile again 2025-11-05 01:18:21 -05:00
Luke Parker
d2f58232c8 Tweak serai-coordinator-cosign to make it closer to compiling again
Adds `PartialOrd, Ord` derivations to some items in `serai-primitives` so they
may be used as keys within `borsh` maps.
2025-11-04 19:42:23 -05:00
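A small illustration of why the derivations matter, assuming `borsh` with its `derive` feature (the struct is a placeholder, not a `serai-primitives` item): `BTreeMap` keys require `Ord`, and `de_strict_order` only accepts maps whose keys are strictly ascending.

```rust
use std::collections::BTreeMap;

use borsh::{BorshSerialize, BorshDeserialize};

// Placeholder standing in for a `serai-primitives` item.
#[derive(PartialEq, Eq, PartialOrd, Ord, BorshSerialize, BorshDeserialize)]
struct SetId {
  network: u8,
  session: u32,
}

fn main() {
  // `SetId` can only key this map because it implements `Ord`.
  let mut map = BTreeMap::new();
  map.insert(SetId { network: 0, session: 1 }, 5u64);
  let bytes = borsh::to_vec(&map).unwrap();
  let _: BTreeMap<SetId, u64> = borsh::from_slice(&bytes).unwrap();
}
```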
Luke Parker
49794b6a75 Correct when spin::Lazy is exposed as std_shims::sync::LazyLock
It's intended to always be used, even on `std`, when `std::sync::LazyLock` is
not available.
2025-11-04 19:28:26 -05:00
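Call-sites are identical either way; a sketch assuming the `std_shims::sync` re-export:

```rust
use std_shims::sync::LazyLock;

// Resolves to `std::sync::LazyLock` on Rust >= 1.80 with `std`, and to
// `spin::Lazy` otherwise.
static GENERATORS: LazyLock<Vec<u64>> = LazyLock::new(|| (0 .. 8).collect());

fn main() {
  assert_eq!(GENERATORS.len(), 8);
}
```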
Luke Parker
973287d0a1 Smash serai-client so the processors don't need the entire lib to access their specific code
We previously controlled this with feature flags. It's just better for them to
have their own crates.
2025-11-04 19:27:53 -05:00
Luke Parker
1b499edfe1 Misc fixes so this compiles 2025-11-04 18:56:56 -05:00
Luke Parker
642848bd24 Bump revm 2025-11-04 13:31:46 -05:00
Luke Parker
f7fb78bdd6 Merge branch 'next' into next-polkadot-sdk 2025-11-04 13:24:00 -05:00
Luke Parker
65613750e1 Merge branch 'next' into next-polkadot-sdk 2025-11-04 12:06:13 -05:00
Luke Parker
56f6ba2dac Merge branch 'develop' into next-polkadot-sdk 2025-10-05 06:04:17 -04:00
Luke Parker
08f6af8bb9 Remove borsh from dkg
It pulls in a lot of bespoke dependencies for little directly-used utility.

Moves the necessary code into the processor.
2025-09-27 02:07:18 -04:00
Luke Parker
3512b3832d Don't actually define a pallet for pallet-session
We need its `Config`, not it as a pallet. As it has no storage, no calls, no
events, this is fine.
2025-09-26 23:08:38 -04:00
Luke Parker
1164f92ea1 Update usage of now-associated const in processor/key-gen 2025-09-26 22:48:52 -04:00
Luke Parker
0a3ead0e19 Add patches to remove the unused optional dependencies tracked in tree
Also performs the usual `cargo update`.
2025-09-26 22:47:47 -04:00
Luke Parker
ea66cd0d1a Update build-dependencies CI action 2025-09-21 15:40:15 -04:00
Luke Parker
8b32fba458 Minor cargo update 2025-09-21 15:40:05 -04:00
Luke Parker
e63acf3f67 Restore a runtime which compiles
Adds BABE and GRANDPA to the runtime definition, plus a few stubs for
not-yet-implemented interfaces.
2025-09-21 13:16:43 -04:00
Luke Parker
d373d2a4c9 Restore AllowMint to serai-validator-sets-pallet and reorganize TODOs 2025-09-20 04:45:28 -04:00
Luke Parker
cbf998ff30 Restore report_slashes
This does not yet handle the `SlashReport`. It solely handles the routing for
it.
2025-09-20 04:16:01 -04:00
Luke Parker
ef07253a27 Restore the event to serai-validator-sets-pallet 2025-09-20 03:42:24 -04:00
Luke Parker
ffae6753ec Restore the set_keys call 2025-09-20 03:04:26 -04:00
Luke Parker
a04215bc13 Remove commented-out slashing code from serai-validator-sets-pallet
Deferred to https://github.com/serai-dex/serai/issues/657.
2025-09-20 03:04:19 -04:00
Luke Parker
28aea8a442 Incorporate check that a validator won't prevent ever not having a single point of failure 2025-09-20 01:58:39 -04:00
Luke Parker
7b46477ca0 Add explicit hook for deciding whether to include the genesis validators 2025-09-20 01:57:55 -04:00
Luke Parker
e62b62ddfb Restore usage of pallet-grandpa to serai-validator-sets-pallet 2025-09-20 01:36:11 -04:00
Luke Parker
a2d8d0fd13 Restore integration with pallet-babe to serai-validator-sets-pallet 2025-09-20 01:23:02 -04:00
Luke Parker
b2b36b17c4 Restore GenesisConfig to the validator sets pallet 2025-09-20 00:06:19 -04:00
Luke Parker
9de8394efa Emit events within the signals pallet 2025-09-19 22:44:29 -04:00
Luke Parker
3cb9432daa Have the coins pallet emit events via serai_core_pallet
`serai_core_pallet` solely defines an accumulator for the events. We use the
traditional `frame_system::Events` to store them for now and enable retrieval.
2025-09-19 22:18:55 -04:00
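A rough sketch of such an accumulator (a hypothetical shape; the actual pallet stores into `frame_system::Events` as noted above):

```rust
/// Per-block event accumulator: pallets deposit events as they execute, and
/// the accumulated events are drained when serving a block's events.
pub struct EventAccumulator<E>(Vec<E>);

impl<E> EventAccumulator<E> {
  pub fn new() -> Self {
    EventAccumulator(Vec::new())
  }
  pub fn deposit(&mut self, event: E) {
    self.0.push(event);
  }
  pub fn drain(&mut self) -> Vec<E> {
    core::mem::take(&mut self.0)
  }
}
```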
Luke Parker
3f5150b3fa Properly define the core pallet instead of placing it within the runtime 2025-09-19 19:05:47 -04:00
Luke Parker
d74b00b9e4 Update monero-oxide to the branch with the new RPC
See https://github.com/monero-oxide/monero-oxide/pull/66.

Allows us to remove the shim `simple-request 0.1` we had to define as we now
have `simple-request 0.2` in tree.
2025-09-18 19:09:22 -04:00
Luke Parker
3955f92cc2 Merge branch 'next' into next-polkadot-sdk 2025-09-18 18:19:14 -04:00
Luke Parker
a1ef18a039 Have simple-request return an error upon failing to find the system's root certificates 2025-09-18 17:03:16 -04:00
Luke Parker
bec806230a Misc updates 2025-09-18 16:25:33 -04:00
Luke Parker
8bafeab5b3 Tidy serai-signals-pallet
Adds `serai-validator-sets-pallet` and `serai-signals-pallet` to the runtime.
2025-09-16 08:45:02 -04:00
Luke Parker
3722df7326 Introduce KeyShares struct to represent the number of key shares
Associated improvements and bug fixes.
2025-09-16 08:45:02 -04:00
Luke Parker
ddb8e1398e Finally make modular-frost work with alloc alone
Carries the update to `frost-schnorrkel` and `bitcoin-serai`.
2025-09-16 08:45:02 -04:00
Luke Parker
2be69b23b1 Tweak multiexp to compile on core
On `core`, it'll use a serial implementation of no performance benefit; the
benefit is a uniform API, such that when `alloc` _is_ enabled, the
multi-scalar multiplication algorithms are used.

`schnorr-signatures` was previously tweaked to include a shim for
`SchnorrSignature::verify` which didn't use `multiexp_vartime`, under this
same premise. Now, instead of callers writing these shims, the shim is within
`multiexp`.
2025-09-16 08:45:02 -04:00
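A hedged sketch of the premise (generic over any types with the right operators; the actual `multiexp` signatures differ):

```rust
use core::ops::{Add, Mul};

// Serial fallback: on `core` alone there's no performance benefit, it just
// keeps one API so callers needn't write shims. `Default` stands in for the
// group identity here.
fn multiexp_vartime_serial<Scalar, Point>(pairs: &[(Scalar, Point)]) -> Point
where
  Scalar: Copy,
  Point: Copy + Default + Add<Output = Point> + Mul<Scalar, Output = Point>,
{
  pairs.iter().fold(Point::default(), |sum, (scalar, point)| sum + (*point * *scalar))
}
```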
Luke Parker
a82ccadbb0 Correct std-shims feature flagging 2025-09-16 08:45:02 -04:00
Luke Parker
1ff2934927 cargo update 2025-09-16 08:44:54 -04:00
Luke Parker
cd4ffa862f Remove coins, validator-sets use of Substrate's event system
We've defined our own.
2025-09-15 21:32:20 -04:00
Luke Parker
c0a4d85ae6 Restore claim_deallocation call to validator-sets pallet 2025-09-15 21:32:01 -04:00
Luke Parker
55e845fe12 Expose std_shims::io on core
The `io::Write` trait is somewhat worthless, being implemented for nothing, yet
`Read` remains fully functional. This also allows using its polyfills _without_
requiring `alloc`.

Opportunity taken to make `schnorr-signatures` not require `alloc`.

This will require a version bump before being published due to newly requiring
the `alloc` feature be specified to maintain pre-existing behavior.

Enables resolving https://github.com/monero-oxide/monero-oxide/issues/48.
2025-09-15 21:24:10 -04:00
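A rough sketch of the shape this enables on `core` alone (not the literal `std_shims` definition):

```rust
// Stand-in error type; the real shim mirrors `std::io::Error` more closely.
#[derive(Debug)]
pub struct Error;

pub trait Read {
  fn read(&mut self, buf: &mut [u8]) -> Result<usize, Error>;

  fn read_exact(&mut self, buf: &mut [u8]) -> Result<(), Error> {
    let mut filled = 0;
    while filled < buf.len() {
      let read = self.read(&mut buf[filled ..])?;
      if read == 0 {
        return Err(Error);
      }
      filled += read;
    }
    Ok(())
  }
}

// `&[u8]` is the canonical reader available without `std` or `alloc`.
impl Read for &[u8] {
  fn read(&mut self, buf: &mut [u8]) -> Result<usize, Error> {
    let len = self.len().min(buf.len());
    let (head, tail) = core::mem::take(self).split_at(len);
    buf[.. len].copy_from_slice(head);
    *self = tail;
    Ok(len)
  }
}
```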
Luke Parker
5ea087d177 Add missing alloc feature to multiexp's use of zeroize
Fixes building `multiexp` without default features, without separately
specifying `zeroize` and adding the `alloc` feature.
2025-09-14 08:55:40 -04:00
Luke Parker
dd7dc0c1dc Add impl<R: Read> Read for &mut R to std_shims
Increases parity with `std::io`.
2025-09-12 18:26:27 -04:00
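Presumably akin to `std::io`'s blanket implementation, delegating through the reference (sketched against the polyfill above):

```rust
impl<R: Read + ?Sized> Read for &mut R {
  fn read(&mut self, buf: &mut [u8]) -> Result<usize, Error> {
    (**self).read(buf)
  }
}
```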
Luke Parker
c83fbb3e44 Expand std_shims::prelude to better match std::prelude 2025-09-12 18:24:56 -04:00
Luke Parker
befbbbfb84 Add the ability to bound the response's size limit to simple-request 2025-09-11 17:24:47 -04:00
Luke Parker
d0f497dc68 Latest patch-polkadot-sdk 2025-09-10 10:02:24 -04:00
Luke Parker
1b755a5d48 patch-polkadot-sdk enabling libp2p 0.56 2025-09-06 17:41:49 -04:00
Luke Parker
e5efcd56ba Make the MSRV lint more robust
The prior version would fail if the last entry in the final array was not
originally the last entry.
2025-09-06 14:43:21 -04:00
Luke Parker
5d60b3c2ae Update parity-db in serai-db
This synchronizes with an update to `patch-polkadot-sdk`.
2025-09-06 14:28:42 -04:00
Luke Parker
ae923b24ff Update `patch-polkadot-sdk`
Allows using `libp2p 0.55`.
2025-09-06 14:04:55 -04:00
Luke Parker
d304cd97e1 Merge branch 'next' into next-polkadot-sdk 2025-09-06 04:26:10 -04:00
Luke Parker
2b56dcdf3f Update patch-polkadot-sdk for bug fixes, removal of is-terminal
Adds a deny entry for `is-terminal` to stop it from secretly reappearing.

Restores the `is-terminal` patch for `is_terminal_polyfill` to have one less
external dependency.
2025-09-06 04:25:21 -04:00
Luke Parker
90804c4c30 Update deny.toml 2025-09-05 14:08:04 -04:00
Luke Parker
46caca2f51 Update patch-polkadot-sdk to remove scale_info 2025-09-05 14:07:52 -04:00
Luke Parker
2077e485bb Add borsh impls for SignedEmbeddedEllipticCurveKeys 2025-09-05 07:21:07 -04:00
Luke Parker
28dbef8a1c Update to the latest patch-polkadot-sdk
Removes several dependencies.
2025-09-05 06:57:30 -04:00
Luke Parker
3541197aa5 Merge branch 'next' into next-polkadot-sdk 2025-09-03 16:44:26 -04:00
Luke Parker
a2209dd6ff Misc clippy fixes 2025-09-03 06:10:54 -04:00
Luke Parker
2032cf355f Expose coins::Pallet::transfer_internal as transfer_fn
It is safe to call and assumes no preconditions.
2025-09-03 00:48:17 -04:00
Luke Parker
fe41b09fd4 Properly handle the error in validator-sets 2025-09-02 11:07:45 -04:00
Luke Parker
74bad049a7 Add abstraction for the embedded elliptic curve keys
It's minimal but still pleasant.
2025-09-02 10:42:06 -04:00
Luke Parker
72fefb3d85 Strongly type EmbeddedEllipticCurveKeys
Adds a signed variant to validate knowledge and ownership.

Add SCALE derivations for `EmbeddedEllipticCurveKeys`
2025-09-02 10:42:02 -04:00
Luke Parker
200c1530a4 WIP changes to validator-sets
Actually use the added `Allocations` abstraction

Start using the sessions API in the validator-sets pallet

Get a `substrate/validator-sets` approximate to compiling
2025-09-02 10:41:58 -04:00
Luke Parker
5736b87b57 Remove final references to scale in coordinator/processor
Slight tweaks to processor
2025-09-02 10:41:55 -04:00
Luke Parker
ada94e8c5d Get all processors to compile again
Requires splitting `serai-cosign` into `serai-cosign` and `serai-cosign-types`
so the processors don't require `serai-client/serai` (not correct yet).
2025-09-02 02:17:10 -04:00
Luke Parker
75240ed327 Update serai-message-queue to the new serai-primitives 2025-09-02 02:17:10 -04:00
Luke Parker
6177cf5c07 Have serai-runtime compile again 2025-09-02 02:17:10 -04:00
Luke Parker
0d38dc96b6 Use serai-primitives, not serai-client, when possible in coordinator/*
Also updates `serai-coordinator-tributary` to prefer `borsh` to SCALE.
2025-09-02 02:17:10 -04:00
Luke Parker
e8094523ff Use borsh instead of SCALE within tendermint-machine, tributary-sdk
Not only does this follow our general practice, but the latest SCALE has a
possibly-lossy truncation in its current implementation for `enum`s, which I'd
like to avoid without simply silencing the lint.
2025-09-02 02:17:09 -04:00
Luke Parker
53a64bc7e2 Update serai-abi, and dependencies, to patch-polkadot-sdk 2025-09-02 02:17:09 -04:00
Luke Parker
3c6e889732 Update Cargo.lock after rebase 2025-08-30 19:36:46 -04:00
Luke Parker
354efc0192 Add deallocate function to validator-sets session abstraction 2025-08-30 18:34:20 -04:00
Luke Parker
e20058feae Add a Sessions abstraction for validator-sets storage 2025-08-30 18:34:20 -04:00
Luke Parker
09f0714894 Add a dedicated Allocations struct for managing validator set allocations
Part of the DB abstraction necessary for this spaghetti.
2025-08-30 18:34:15 -04:00
Luke Parker
d3d539553c Restore the coins pallet to the runtime 2025-08-30 18:32:26 -04:00
Luke Parker
b08ae8e6a7 Add a non-canonical SCALE derivations feature
Enables representing the IUMT (`IncrementalUnbalancedMerkleTree`) within
`StorageValues`. Applied to a variety of
values.

Fixes a bug where `Some([0; 32])` would be considered a valid block anchor.
2025-08-30 18:32:21 -04:00
Luke Parker
35db2924b4 Populate UnbalancedMerkleTrees in headers 2025-08-30 18:32:20 -04:00
Luke Parker
bfff823bf7 Add an UnbalancedMerkleTree primitive
The reasoning for it is documented alongside the primitive itself. The plan is
to use it within our header for committing to the DAG (allowing one header per
epoch, yet logarithmic proofs for any header within the epoch), the
transactions commitment (allowing logarithmic proofs of a transaction within a
block, without padding), and the events commitment (allowing logarithmic
proofs of unique events within a block, despite events not inherently having a
unique ID).

This also defines transaction hashes and performs the necessary modifications
for transactions to be unique.
2025-08-30 18:32:16 -04:00
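One way an unbalanced tree can avoid padding is to carry an odd trailing node up to the next layer unchanged; a sketch of that construction (the actual primitive documents its own design and applies the branch/leaf domain-separation tags seen elsewhere in this changeset):

```rust
type Hash = [u8; 32];

/// Root of an unbalanced Merkle tree: pair nodes left-to-right and carry any
/// odd trailing node up unhashed, so no padding leaves are introduced yet
/// every leaf retains a logarithmic-length proof.
fn unbalanced_root(mut layer: Vec<Hash>, branch: impl Fn(Hash, Hash) -> Hash) -> Option<Hash> {
  if layer.is_empty() {
    return None;
  }
  while layer.len() > 1 {
    // Set aside the odd node, if any, leaving an even number to pair.
    let odd = (layer.len() % 2 == 1).then(|| layer.pop().unwrap());
    layer = layer.chunks(2).map(|pair| branch(pair[0], pair[1])).collect();
    // Carry the odd node up, unmodified.
    layer.extend(odd);
  }
  Some(layer[0])
}
```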
Luke Parker
352af85498 Use borsh entirely in create_db 2025-08-30 18:32:07 -04:00
Luke Parker
ecad89b269 Remove now-consolidated primitives crates 2025-08-30 18:32:06 -04:00
Luke Parker
48f5ed71d7 Skeleton runtime with new types 2025-08-30 18:30:38 -04:00
Luke Parker
ed9cbdd8e0 Have apply return Ok even if calls failed
This ensures fees are paid, and block building isn't interrupted, even for TXs
which error.
2025-08-30 18:27:23 -04:00
Luke Parker
0ac11defcc Serialize BoundedVec not with a u32 length, but the minimum-viable uN where N%8==0
This does break borsh's definition of a Vec EXCEPT if the BoundedVec is
considered an enum. For sufficiently low bounds, this is viable, though it
requires automated code generation to be sane.
2025-08-30 18:27:23 -04:00
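For instance, a bound below 256 only ever needs a single length byte; a minimal sketch of the idea (the actual implementation is generated, per the above):

```rust
// Serialize a bounded byte vector with BOUND < 256 using a `u8` length
// prefix, the minimum-viable `uN` where N % 8 == 0, instead of a `u32`.
fn serialize_bounded<const BOUND: usize>(v: &[u8], out: &mut Vec<u8>) {
  assert!(BOUND < 256);
  assert!(v.len() <= BOUND);
  out.push(v.len() as u8);
  out.extend_from_slice(v);
}

fn main() {
  let mut out = vec![];
  serialize_bounded::<16>(&[1, 2, 3], &mut out);
  // 4 bytes on the wire, where a `u32` length prefix would take 7.
  assert_eq!(out, vec![3, 1, 2, 3]);
}
```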
Luke Parker
24e89316d5 Correct distinction/flow of check/validate/apply 2025-08-30 18:27:23 -04:00
Luke Parker
3f03dac050 Make transaction an enum of Unsigned, Signed 2025-08-30 18:27:23 -04:00
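Presumably along these lines (a hypothetical shape with illustrative fields, each variant carrying the `Vec` of calls a transaction is defined as):

```rust
// Illustrative stand-in for the ABI's call type.
pub enum Call {
  Transfer { to: [u8; 32], amount: u64 },
}

// Hypothetical shape: both kinds carry a `Vec` of calls, applied atomically.
pub enum Transaction {
  Unsigned {
    calls: Vec<Call>,
  },
  Signed {
    calls: Vec<Call>,
    signer: [u8; 32],    // illustrative: the signer's key/address
    signature: [u8; 64], // illustrative: the signature over the calls
  },
}
```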
Luke Parker
820b710928 Remove RuntimeCall from Transaction
I believe this was originally here as we needed to return a reference, not an
owned instance, so this caching enabled returning a reference? Regardless, it
isn't valuable now.
2025-08-30 18:27:23 -04:00
Luke Parker
88c7ae3e7d Add traits necessary for serai_abi::Transaction to be usable in-runtime 2025-08-30 18:27:22 -04:00
Luke Parker
dd5e43760d Add the UNIX timestamp (in milliseconds) to the block
This is read from the BABE pre-digest when converting from a SubstrateHeader.
This causes the genesis block to have time 0 and all blocks produced with BABE
to have a time of the slot time. While the slot time is in 6-second intervals
(due to our target block time), defining in milliseconds preserves the ABI for
long-term goals (sub-second blocks).

Usage of the slot time deduplicates this field with BABE, and leaves the only
possible manipulation to propose during a slot or to not propose during a slot.

The actual reason this was implemented this way is because the Header trait is
overly restrictive and doesn't allow definition with new fields. Even if we
wanted to express the timestamp within the SubstrateHeader, we can't without
replacing Header::new and making a variety of changes to the polkadot-sdk
accordingly. Those aren't worth it at this moment compared to the solution
implemented.
2025-08-30 18:27:09 -04:00
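The derivation itself is simple; a sketch assuming the conventional epoch-based slot numbering and the stated 6-second target block time:

```rust
// Milliseconds per BABE slot, per the 6-second target block time.
const SLOT_DURATION_MS: u64 = 6_000;

/// Block time in milliseconds, derived from the BABE pre-digest's slot.
/// Genesis has no pre-digest and is defined as time 0.
fn block_time_ms(babe_slot: Option<u64>) -> u64 {
  babe_slot.map_or(0, |slot| slot * SLOT_DURATION_MS)
}
```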
Luke Parker
776e417fd2 Redo primitives, abi
Consolidates all primitives into a single crate. We didn't benefit from its
fragmentation. I'm hesitant to say the new internal-organization is better (it
may be just as clunky), but it's at least in a single crate (not spread out
over micro-crates).

The ABI is the most distinct. We now entirely own it. Block header hashes don't
directly commit to any BABE data (avoiding potentially ~4 KB headers upon
session changes), and are hashed as borsh (a more widely used codec than
SCALE). There are still Substrate variants, using SCALE and with the BABE data,
but they're prunable from a protocol design perspective.

Defines a transaction as a Vec of Calls, allowing atomic operations.
2025-08-30 18:26:37 -04:00
Luke Parker
2f8ce15a92 Update deny, rust-src component 2025-08-30 18:25:02 -04:00
Luke Parker
af56304676 Update the git tags
Does no actual migration work. This allows establishing the difference in
dependencies between substrate and polkadot-sdk/substrate.
2025-08-30 18:23:49 -04:00
Luke Parker
62a2c4f20e Update nightly version 2025-08-30 18:22:48 -04:00
Luke Parker
c69841710a Remove unnecessary to_string for clone 2025-08-30 18:08:08 -04:00
Luke Parker
3158590675 Remove unused patch for parking_lot_core 2025-08-30 16:20:29 -04:00
418 changed files with 14803 additions and 17663 deletions

View File

@@ -5,14 +5,14 @@ inputs:
version:
description: "Version to download and run"
required: false
default: "29.1"
default: "30.0"
runs:
using: "composite"
steps:
- name: Bitcoin Daemon Cache
id: cache-bitcoind
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # 4.2.4
with:
path: bitcoin.tar.gz
key: bitcoind-${{ runner.os }}-${{ runner.arch }}-${{ inputs.version }}

View File

@@ -52,9 +52,9 @@ runs:
- name: Install solc
shell: bash
run: |
cargo +1.90 install svm-rs --version =0.5.19
svm install 0.8.26
svm use 0.8.26
cargo +1.91.1 install svm-rs --version =0.5.21
svm install 0.8.29
svm use 0.8.29
- name: Remove preinstalled Docker
shell: bash
@@ -75,11 +75,8 @@ runs:
if: runner.os == 'Linux'
- name: Install rootless Docker
uses: docker/setup-docker-action@b60f85385d03ac8acfca6d9996982511d8620a19
uses: docker/setup-docker-action@e61617a16c407a86262fb923c35a616ddbe070b3 # 4.6.0
with:
rootless: true
set-host: true
if: runner.os == 'Linux'
# - name: Cache Rust
# uses: Swatinem/rust-cache@a95ba195448af2da9b00fb742d14ffaaf3c21f43

View File

@@ -5,14 +5,14 @@ inputs:
version:
description: "Version to download and run"
required: false
default: v0.18.3.4
default: v0.18.4.4
runs:
using: "composite"
steps:
- name: Monero Wallet RPC Cache
id: cache-monero-wallet-rpc
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # 4.2.4
with:
path: monero-wallet-rpc
key: monero-wallet-rpc-${{ runner.os }}-${{ runner.arch }}-${{ inputs.version }}

View File

@@ -5,14 +5,14 @@ inputs:
version:
description: "Version to download and run"
required: false
default: v0.18.3.4
default: v0.18.4.4
runs:
using: "composite"
steps:
- name: Monero Daemon Cache
id: cache-monerod
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # 4.2.4
with:
path: /usr/bin/monerod
key: monerod-${{ runner.os }}-${{ runner.arch }}-${{ inputs.version }}

View File

@@ -5,12 +5,12 @@ inputs:
monero-version:
description: "Monero version to download and run as a regtest node"
required: false
default: v0.18.3.4
default: v0.18.4.4
bitcoin-version:
description: "Bitcoin version to download and run as a regtest node"
required: false
default: "29.1"
default: "30.0"
runs:
using: "composite"
@@ -19,9 +19,9 @@ runs:
uses: ./.github/actions/build-dependencies
- name: Install Foundry
uses: foundry-rs/foundry-toolchain@8f1998e9878d786675189ef566a2e4bf24869773
uses: foundry-rs/foundry-toolchain@50d5a8956f2e319df19e6b57539d7e2acb9f8c1e # 1.5.0
with:
version: nightly-f625d0fa7c51e65b4bf1e8f7931cd1c6e2e285e9
version: v1.5.0
cache: false
- name: Run a Monero Regtest Node

View File

@@ -1 +1 @@
nightly-2025-11-01
nightly-2025-12-01

View File

@@ -17,7 +17,7 @@ jobs:
test-common:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
- name: Build Dependencies
uses: ./.github/actions/build-dependencies

View File

@@ -31,7 +31,7 @@ jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
- name: Install Build Dependencies
uses: ./.github/actions/build-dependencies

View File

@@ -19,7 +19,7 @@ jobs:
test-crypto:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
- name: Build Dependencies
uses: ./.github/actions/build-dependencies

View File

@@ -9,16 +9,10 @@ jobs:
name: Run cargo deny
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Advisory Cache
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809
with:
path: ~/.cargo/advisory-db
key: rust-advisory-db
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
- name: Install cargo deny
run: cargo +1.90 install cargo-deny --version =0.18.4
run: cargo +1.91.1 install cargo-deny --version =0.18.6
- name: Run cargo deny
run: cargo deny -L error --all-features check --hide-inclusion-graph

View File

@@ -13,7 +13,7 @@ jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
- name: Install Build Dependencies
uses: ./.github/actions/build-dependencies

View File

@@ -15,7 +15,7 @@ jobs:
runs-on: ${{ matrix.os }}
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
- name: Get nightly version to use
id: nightly
@@ -26,7 +26,7 @@ jobs:
uses: ./.github/actions/build-dependencies
- name: Install nightly rust
run: rustup toolchain install ${{ steps.nightly.outputs.version }} --profile minimal -t wasm32v1-none -c rust-src -c clippy
run: rustup toolchain install ${{ steps.nightly.outputs.version }} --profile minimal -t wasm32v1-none -c clippy
- name: Run Clippy
run: cargo +${{ steps.nightly.outputs.version }} clippy --all-features --all-targets -- -D warnings -A clippy::items_after_test_module
@@ -43,16 +43,10 @@ jobs:
deny:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Advisory Cache
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809
with:
path: ~/.cargo/advisory-db
key: rust-advisory-db
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
- name: Install cargo deny
run: cargo +1.90 install cargo-deny --version =0.18.4
run: cargo +1.91.1 install cargo-deny --version =0.18.6
- name: Run cargo deny
run: cargo deny -L error --all-features check --hide-inclusion-graph
@@ -60,7 +54,7 @@ jobs:
fmt:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
- name: Get nightly version to use
id: nightly
@@ -73,10 +67,10 @@ jobs:
- name: Run rustfmt
run: cargo +${{ steps.nightly.outputs.version }} fmt -- --check
- name: Install foundry
uses: foundry-rs/foundry-toolchain@8f1998e9878d786675189ef566a2e4bf24869773
- name: Install Foundry
uses: foundry-rs/foundry-toolchain@50d5a8956f2e319df19e6b57539d7e2acb9f8c1e # 1.5.0
with:
version: nightly-41d4e5437107f6f42c7711123890147bc736a609
version: v1.5.0
cache: false
- name: Run forge fmt
@@ -85,20 +79,20 @@ jobs:
machete:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
- name: Verify all dependencies are in use
run: |
cargo +1.90 install cargo-machete --version =0.9.1
cargo +1.90 machete
cargo +1.91.1 install cargo-machete --version =0.9.1
cargo +1.91.1 machete
msrv:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
- name: Verify claimed `rust-version`
shell: bash
run: |
cargo +1.90 install cargo-msrv --version =0.18.4
cargo +1.91.1 install cargo-msrv --version =0.18.4
function check_msrv {
# We `cd` into the directory passed as the first argument, but will return to the
@@ -155,8 +149,6 @@ jobs:
# Correct the last line, which was malleated to "],"
members=$(echo "$members" | sed "$(echo "$members" | wc -l)s/\]\,/\]/")
# Don't check the patches
members=$(echo "$members" | grep -v "patches")
# Don't check the following
# Most of these are binaries, with the exception of the Substrate runtime which has a
# bespoke build pipeline
@@ -191,16 +183,16 @@ jobs:
slither:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
- name: Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Slither
run: |
python3 -m pip install solc-select
solc-select install 0.8.26
solc-select use 0.8.26
python3 -m pip install slither-analyzer==0.11.3
python3 -m pip install slither-analyzer
slither --include-paths ./networks/ethereum/schnorr/contracts/Schnorr.sol
slither ./networks/ethereum/schnorr/contracts/Schnorr.sol
slither --include-paths ./networks/ethereum/schnorr/contracts ./networks/ethereum/schnorr/contracts/tests/Schnorr.sol
slither processor/ethereum/deployer/contracts/Deployer.sol
slither processor/ethereum/erc20/contracts/IERC20.sol

View File

@@ -27,7 +27,7 @@ jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
- name: Install Build Dependencies
uses: ./.github/actions/build-dependencies

View File

@@ -17,7 +17,7 @@ jobs:
test-common:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
- name: Build Dependencies
uses: ./.github/actions/build-dependencies

View File

@@ -9,7 +9,7 @@ jobs:
name: Update nightly
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
with:
submodules: "recursive"

View File

@@ -21,7 +21,7 @@ jobs:
test-networks:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
- name: Test Dependencies
uses: ./.github/actions/test-dependencies

View File

@@ -23,7 +23,7 @@ jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
- name: Install Build Dependencies
uses: ./.github/actions/build-dependencies

View File

@@ -46,16 +46,16 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
- name: Setup Ruby
uses: ruby/setup-ruby@44511735964dcb71245e7e55f72539531f7bc0eb
uses: ruby/setup-ruby@8aeb6ff8030dd539317f8e1769a044873b56ea71 # 1.268.0
with:
bundler-cache: true
cache-version: 0
working-directory: "${{ github.workspace }}/docs"
- name: Setup Pages
id: pages
uses: actions/configure-pages@983d7736d9b0ae728b81ab479565c72886d7745b
uses: actions/configure-pages@983d7736d9b0ae728b81ab479565c72886d7745b # 5.0.0
- name: Build with Jekyll
run: cd ${{ github.workspace }}/docs && bundle exec jekyll build --baseurl "${{ steps.pages.outputs.base_path }}"
env:
@@ -74,7 +74,7 @@ jobs:
mv target/doc docs/_site/rust
- name: Upload artifact
uses: actions/upload-pages-artifact@7b1f4a764d45c48632c6b24a0339c27f5614fb0b
uses: actions/upload-pages-artifact@7b1f4a764d45c48632c6b24a0339c27f5614fb0b # 4.0.0
with:
path: "docs/_site/"
@@ -88,4 +88,4 @@ jobs:
steps:
- name: Deploy to GitHub Pages
id: deployment
uses: actions/deploy-pages@d6db90164ac5ed86f2b6aed7e0febac5b3c0c03e
uses: actions/deploy-pages@d6db90164ac5ed86f2b6aed7e0febac5b3c0c03e # 4.0.5

View File

@@ -31,7 +31,7 @@ jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
- name: Install Build Dependencies
uses: ./.github/actions/build-dependencies

View File

@@ -27,7 +27,7 @@ jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
- name: Install Build Dependencies
uses: ./.github/actions/build-dependencies

View File

@@ -29,7 +29,7 @@ jobs:
test-infra:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
- name: Build Dependencies
uses: ./.github/actions/build-dependencies
@@ -61,6 +61,7 @@ jobs:
-p serai-monero-processor \
-p tendermint-machine \
-p tributary-sdk \
-p serai-cosign-types \
-p serai-cosign \
-p serai-coordinator-substrate \
-p serai-coordinator-tributary \
@@ -73,7 +74,7 @@ jobs:
test-substrate:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
- name: Build Dependencies
uses: ./.github/actions/build-dependencies
@@ -82,31 +83,33 @@ jobs:
run: |
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features \
-p serai-primitives \
-p serai-coins-primitives \
-p serai-coins-pallet \
-p serai-dex-pallet \
-p serai-validator-sets-primitives \
-p serai-validator-sets-pallet \
-p serai-genesis-liquidity-primitives \
-p serai-genesis-liquidity-pallet \
-p serai-emissions-primitives \
-p serai-emissions-pallet \
-p serai-economic-security-pallet \
-p serai-in-instructions-primitives \
-p serai-in-instructions-pallet \
-p serai-signals-primitives \
-p serai-signals-pallet \
-p serai-abi \
-p substrate-median \
-p serai-core-pallet \
-p serai-coins-pallet \
-p serai-validator-sets-pallet \
-p serai-signals-pallet \
-p serai-dex-pallet \
-p serai-genesis-liquidity-pallet \
-p serai-economic-security-pallet \
-p serai-emissions-pallet \
-p serai-in-instructions-pallet \
-p serai-runtime \
-p serai-node
-p serai-substrate-tests
test-serai-client:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
- name: Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Run Tests
run: GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features -p serai-client
run: |
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features -p serai-client-serai
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features -p serai-client-bitcoin
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features -p serai-client-ethereum
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features -p serai-client-monero
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features -p serai-client

.gitignore (vendored): 3 changes
View File

@@ -2,11 +2,10 @@ target
# Don't commit any `Cargo.lock` which aren't the workspace's
Cargo.lock
!./Cargo.lock
!/Cargo.lock
# Don't commit any `Dockerfile`, as they're auto-generated, except the only one which isn't
Dockerfile
Dockerfile.fast-epoch
!orchestration/runtime/Dockerfile
.test-logs

Cargo.lock (generated): 2717 changes

File diff suppressed because it is too large

View File

@@ -62,13 +62,14 @@ members = [
"processor/ethereum/primitives",
"processor/ethereum/test-primitives",
"processor/ethereum/deployer",
"processor/ethereum/router",
"processor/ethereum/erc20",
"processor/ethereum/router",
"processor/ethereum",
"processor/monero",
"coordinator/tributary-sdk/tendermint",
"coordinator/tributary-sdk",
"coordinator/cosign/types",
"coordinator/cosign",
"coordinator/substrate",
"coordinator/tributary",
@@ -77,34 +78,27 @@ members = [
"coordinator",
"substrate/primitives",
"substrate/coins/primitives",
"substrate/coins/pallet",
"substrate/dex/pallet",
"substrate/validator-sets/primitives",
"substrate/validator-sets/pallet",
"substrate/genesis-liquidity/primitives",
"substrate/genesis-liquidity/pallet",
"substrate/emissions/primitives",
"substrate/emissions/pallet",
"substrate/economic-security/pallet",
"substrate/in-instructions/primitives",
"substrate/in-instructions/pallet",
"substrate/signals/primitives",
"substrate/signals/pallet",
"substrate/abi",
"substrate/median",
"substrate/core",
"substrate/coins",
"substrate/validator-sets",
"substrate/signals",
"substrate/dex",
"substrate/genesis-liquidity",
"substrate/economic-security",
"substrate/emissions",
"substrate/in-instructions",
"substrate/runtime",
"substrate/node",
"substrate/client/serai",
"substrate/client/bitcoin",
"substrate/client/ethereum",
"substrate/client/monero",
"substrate/client",
"orchestration",
@@ -117,6 +111,7 @@ members = [
"tests/message-queue",
# TODO "tests/processor",
# TODO "tests/coordinator",
"tests/substrate",
# TODO "tests/full-stack",
"tests/reproducible-runtime",
]
@@ -138,11 +133,14 @@ dalek-ff-group = { opt-level = 3 }
multiexp = { opt-level = 3 }
monero-generators = { opt-level = 3 }
monero-borromean = { opt-level = 3 }
monero-bulletproofs = { opt-level = 3 }
monero-io = { opt-level = 3 }
monero-primitives = { opt-level = 3 }
monero-ed25519 = { opt-level = 3 }
monero-mlsag = { opt-level = 3 }
monero-clsag = { opt-level = 3 }
monero-borromean = { opt-level = 3 }
monero-bulletproofs-generators = { opt-level = 3 }
monero-bulletproofs = { opt-level = 3 }
monero-oxide = { opt-level = 3 }
# Always compile the eVRF DKG tree with optimizations as well
@@ -172,7 +170,14 @@ panic = "unwind"
overflow-checks = true
[patch.crates-io]
# Dependencies from monero-oxide which originate from within our own tree
# Point to empty crates for crates unused within our tree
alloy-eip2124 = { path = "patches/ethereum/alloy-eip2124" }
ark-ff-3 = { package = "ark-ff", path = "patches/ethereum/ark-ff-0.3" }
ark-ff-4 = { package = "ark-ff", path = "patches/ethereum/ark-ff-0.4" }
c-kzg = { path = "patches/ethereum/c-kzg" }
secp256k1-30 = { package = "secp256k1", path = "patches/ethereum/secp256k1-30" }
# Dependencies from monero-oxide which originate from within our own tree, potentially shimmed to account for deviations since publishing
std-shims = { path = "patches/std-shims" }
simple-request = { path = "patches/simple-request" }
multiexp = { path = "crypto/multiexp" }
@@ -182,12 +187,18 @@ dalek-ff-group = { path = "patches/dalek-ff-group" }
minimal-ed448 = { path = "crypto/ed448" }
modular-frost = { path = "crypto/frost" }
# Patch due to `std` now including the required functionality
is_terminal_polyfill = { path = "./patches/is_terminal_polyfill" }
# This has a non-deprecated `std` alternative since Rust's 2024 edition
home = { path = "patches/home" }
# Updates to the latest version
darling = { path = "patches/darling" }
thiserror = { path = "patches/thiserror" }
# https://github.com/rust-lang-nursery/lazy-static.rs/issues/201
lazy_static = { git = "https://github.com/rust-lang-nursery/lazy-static.rs", rev = "5735630d46572f1e5377c8f2ba0f79d18f53b10c" }
# These has an `std` alternative since Rust's 2024 edition
home = { path = "patches/home" }
# directories-next was created because directories was unmaintained
# directories-next is now unmaintained while directories is maintained
# The directories author pulls in ridiculously pointless crates and prefers
@@ -196,16 +207,22 @@ home = { path = "patches/home" }
option-ext = { path = "patches/option-ext" }
directories-next = { path = "patches/directories-next" }
# Patch to include `FromUniformBytes<64>` over Scalar
# Patch from a fork back to upstream
parity-bip39 = { path = "patches/parity-bip39" }
# Patch to include `FromUniformBytes<64>` over `Scalar`
k256 = { git = "https://github.com/kayabaNerve/elliptic-curves", rev = "4994c9ab163781a88cd4a49beae812a89a44e8c3" }
p256 = { git = "https://github.com/kayabaNerve/elliptic-curves", rev = "4994c9ab163781a88cd4a49beae812a89a44e8c3" }
# `jemalloc` conflicts with `mimalloc`, so patch to a `kvdb-rocksdb` which never exposes `jemalloc`
kvdb-rocksdb = { path = "patches/kvdb-rocksdb" }
[workspace.lints.clippy]
incompatible_msrv = "allow" # Manually verified with a GitHub workflow
manual_is_multiple_of = "allow"
unwrap_or_default = "allow"
map_unwrap_or = "allow"
needless_continue = "allow"
manual_is_multiple_of = "allow"
incompatible_msrv = "allow" # Manually verified with a GitHub workflow
borrow_as_ptr = "deny"
cast_lossless = "deny"
cast_possible_truncation = "deny"

View File

@@ -7,7 +7,7 @@ repository = "https://github.com/serai-dex/serai/tree/develop/common/db"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = []
edition = "2021"
rust-version = "1.65"
rust-version = "1.77"
[package.metadata.docs.rs]
all-features = true
@@ -17,7 +17,7 @@ rustdoc-args = ["--cfg", "docsrs"]
workspace = true
[dependencies]
parity-db = { version = "0.5", default-features = false, optional = true }
parity-db = { version = "0.5", default-features = false, features = ["arc"], optional = true }
rocksdb = { version = "0.24", default-features = false, features = ["zstd"], optional = true }
[features]

View File

@@ -15,7 +15,7 @@ pub fn serai_db_key(
///
/// Creates a unit struct and a default implementation for the `key`, `get`, and `set`. The macro
/// uses a syntax similar to defining a function. Parameters are concatenated to produce a key,
/// they must be `scale` encodable. The return type is used to auto encode and decode the database
/// they must be `borsh` serializable. The return type is used to auto (de)serialize the database
/// value bytes using `borsh`.
///
/// # Arguments
@@ -54,11 +54,10 @@ macro_rules! create_db {
)?;
impl$(<$($generic_name: $generic_type),+>)? $field_name$(<$($generic_name),+>)? {
pub(crate) fn key($($arg: $arg_type),*) -> Vec<u8> {
use scale::Encode;
$crate::serai_db_key(
stringify!($db_name).as_bytes(),
stringify!($field_name).as_bytes(),
($($arg),*).encode()
&borsh::to_vec(&($($arg),*)).unwrap(),
)
}
pub(crate) fn set(

View File

@@ -35,6 +35,9 @@ mod mutex_shim {
pub use mutex_shim::{ShimMutex as Mutex, MutexGuard};
#[rustversion::before(1.80)]
pub use spin::Lazy as LazyLock;
#[rustversion::since(1.80)]
#[cfg(not(feature = "std"))]
pub use spin::Lazy as LazyLock;
#[rustversion::since(1.80)]

View File

@@ -31,7 +31,6 @@ frost = { package = "modular-frost", path = "../crypto/frost" }
frost-schnorrkel = { path = "../crypto/schnorrkel" }
hex = { version = "0.4", default-features = false, features = ["std"] }
scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["std", "derive", "bit-vec"] }
borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] }
zalloc = { path = "../common/zalloc" }
@@ -43,7 +42,7 @@ messages = { package = "serai-processor-messages", path = "../processor/messages
message-queue = { package = "serai-message-queue", path = "../message-queue" }
tributary-sdk = { path = "./tributary-sdk" }
serai-client = { path = "../substrate/client", default-features = false, features = ["serai", "borsh"] }
serai-client-serai = { path = "../substrate/client/serai", default-features = false }
log = { version = "0.4", default-features = false, features = ["std"] }
env_logger = { version = "0.10", default-features = false, features = ["humantime"] }

View File

@@ -19,11 +19,9 @@ workspace = true
[dependencies]
blake2 = { version = "0.11.0-rc.0", default-features = false, features = ["alloc"] }
schnorrkel = { version = "0.11", default-features = false, features = ["std"] }
scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["std", "derive"] }
borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] }
serai-client = { path = "../../substrate/client", default-features = false, features = ["serai", "borsh"] }
serai-client-serai = { path = "../../substrate/client/serai", default-features = false }
log = { version = "0.4", default-features = false, features = ["std"] }
@@ -31,3 +29,5 @@ tokio = { version = "1", default-features = false }
serai-db = { path = "../../common/db", version = "0.1.1" }
serai-task = { path = "../../common/task", version = "0.1" }
serai-cosign-types = { path = "./types" }

View File

@@ -1,10 +1,21 @@
use core::future::Future;
use std::{sync::Arc, collections::HashMap};
use serai_client::{
primitives::{SeraiAddress, Amount},
validator_sets::primitives::ExternalValidatorSet,
Serai,
use blake2::{Digest, Blake2b256};
use serai_client_serai::{
abi::{
primitives::{
network_id::{ExternalNetworkId, NetworkId},
balance::Amount,
crypto::Public,
validator_sets::{Session, ExternalValidatorSet},
address::SeraiAddress,
merkle::IncrementalUnbalancedMerkleTree,
},
validator_sets::Event,
},
Serai, Events,
};
use serai_db::*;
@@ -12,9 +23,20 @@ use serai_task::ContinuallyRan;
use crate::*;
#[derive(BorshSerialize, BorshDeserialize)]
struct Set {
session: Session,
key: Public,
stake: Amount,
}
create_db!(
CosignIntend {
ScanCosignFrom: () -> u64,
BuildsUpon: () -> IncrementalUnbalancedMerkleTree,
Stakes: (network: ExternalNetworkId, validator: SeraiAddress) -> Amount,
Validators: (set: ExternalValidatorSet) -> Vec<SeraiAddress>,
LatestSet: (network: ExternalNetworkId) -> Set,
}
);
@@ -35,23 +57,38 @@ db_channel! {
async fn block_has_events_justifying_a_cosign(
serai: &Serai,
block_number: u64,
) -> Result<(Block, HasEvents), String> {
) -> Result<(Block, Events, HasEvents), String> {
let block = serai
.finalized_block_by_number(block_number)
.block_by_number(block_number)
.await
.map_err(|e| format!("{e:?}"))?
.ok_or_else(|| "couldn't get block which should've been finalized".to_string())?;
let serai = serai.as_of(block.hash());
let events = serai.events(block.header.hash()).await.map_err(|e| format!("{e:?}"))?;
if !serai.validator_sets().key_gen_events().await.map_err(|e| format!("{e:?}"))?.is_empty() {
return Ok((block, HasEvents::Notable));
if events.validator_sets().set_keys_events().next().is_some() {
return Ok((block, events, HasEvents::Notable));
}
if !serai.coins().burn_with_instruction_events().await.map_err(|e| format!("{e:?}"))?.is_empty() {
return Ok((block, HasEvents::NonNotable));
if events.coins().burn_with_instruction_events().next().is_some() {
return Ok((block, events, HasEvents::NonNotable));
}
Ok((block, HasEvents::No))
Ok((block, events, HasEvents::No))
}
// Fetch the `ExternalValidatorSet`s, and their associated keys, used for cosigning as of this
// block.
fn cosigning_sets(getter: &impl Get) -> Vec<(ExternalValidatorSet, Public, Amount)> {
let mut sets = vec![];
for network in ExternalNetworkId::all() {
let Some(Set { session, key, stake }) = LatestSet::get(getter, network) else {
// If this network doesn't have usable keys, move on
continue;
};
sets.push((ExternalValidatorSet { network, session }, key, stake));
}
sets
}
/// A task to determine which blocks we should intend to cosign.
@@ -67,56 +104,108 @@ impl<D: Db> ContinuallyRan for CosignIntendTask<D> {
async move {
let start_block_number = ScanCosignFrom::get(&self.db).unwrap_or(1);
let latest_block_number =
self.serai.latest_finalized_block().await.map_err(|e| format!("{e:?}"))?.number();
self.serai.latest_finalized_block_number().await.map_err(|e| format!("{e:?}"))?;
for block_number in start_block_number ..= latest_block_number {
let mut txn = self.db.txn();
let (block, mut has_events) =
let (block, events, mut has_events) =
block_has_events_justifying_a_cosign(&self.serai, block_number)
.await
.map_err(|e| format!("{e:?}"))?;
let mut builds_upon =
BuildsUpon::get(&txn).unwrap_or(IncrementalUnbalancedMerkleTree::new());
// Check we are indexing a linear chain
if (block_number > 1) &&
(<[u8; 32]>::from(block.header.parent_hash) !=
SubstrateBlockHash::get(&txn, block_number - 1)
.expect("indexing a block but haven't indexed its parent"))
if block.header.builds_upon() !=
builds_upon.clone().calculate(serai_client_serai::abi::BLOCK_HEADER_BRANCH_TAG)
{
Err(format!(
"node's block #{block_number} doesn't build upon the block #{} prior indexed",
block_number - 1
))?;
}
let block_hash = block.hash();
let block_hash = block.header.hash();
SubstrateBlockHash::set(&mut txn, block_number, &block_hash);
builds_upon.append(
serai_client_serai::abi::BLOCK_HEADER_BRANCH_TAG,
Blake2b256::new_with_prefix([serai_client_serai::abi::BLOCK_HEADER_LEAF_TAG])
.chain_update(block_hash.0)
.finalize()
.into(),
);
BuildsUpon::set(&mut txn, &builds_upon);
// Update the stakes
for event in events.validator_sets().allocation_events() {
let Event::Allocation { validator, network, amount } = event else {
panic!("event from `allocation_events` wasn't `Event::Allocation`")
};
let Ok(network) = ExternalNetworkId::try_from(*network) else { continue };
let existing = Stakes::get(&txn, network, *validator).unwrap_or(Amount(0));
Stakes::set(&mut txn, network, *validator, &Amount(existing.0 + amount.0));
}
for event in events.validator_sets().deallocation_events() {
let Event::Deallocation { validator, network, amount, timeline: _ } = event else {
panic!("event from `deallocation_events` wasn't `Event::Deallocation`")
};
let Ok(network) = ExternalNetworkId::try_from(*network) else { continue };
let existing = Stakes::get(&txn, network, *validator).unwrap_or(Amount(0));
Stakes::set(&mut txn, network, *validator, &Amount(existing.0 - amount.0));
}
// Handle decided sets
for event in events.validator_sets().set_decided_events() {
let Event::SetDecided { set, validators } = event else {
panic!("event from `set_decided_events` wasn't `Event::SetDecided`")
};
let Ok(set) = ExternalValidatorSet::try_from(*set) else { continue };
Validators::set(
&mut txn,
set,
&validators.iter().map(|(validator, _key_shares)| *validator).collect(),
);
}
// Handle declarations of the latest set
for event in events.validator_sets().set_keys_events() {
let Event::SetKeys { set, key_pair } = event else {
panic!("event from `set_keys_events` wasn't `Event::SetKeys`")
};
let mut stake = 0;
for validator in
Validators::take(&mut txn, *set).expect("set which wasn't decided set keys")
{
stake += Stakes::get(&txn, set.network, validator).unwrap_or(Amount(0)).0;
}
LatestSet::set(
&mut txn,
set.network,
&Set { session: set.session, key: key_pair.0, stake: Amount(stake) },
);
}
let global_session_for_this_block = LatestGlobalSessionIntended::get(&txn);
// If this is notable, it creates a new global session, which we index into the database
// now
if has_events == HasEvents::Notable {
let serai = self.serai.as_of(block_hash);
let sets_and_keys = cosigning_sets(&serai).await?;
let global_session =
GlobalSession::id(sets_and_keys.iter().map(|(set, _key)| *set).collect());
let sets_and_keys_and_stakes = cosigning_sets(&txn);
let global_session = GlobalSession::id(
sets_and_keys_and_stakes.iter().map(|(set, _key, _stake)| *set).collect(),
);
let mut sets = Vec::with_capacity(sets_and_keys.len());
let mut keys = HashMap::with_capacity(sets_and_keys.len());
let mut stakes = HashMap::with_capacity(sets_and_keys.len());
let mut sets = Vec::with_capacity(sets_and_keys_and_stakes.len());
let mut keys = HashMap::with_capacity(sets_and_keys_and_stakes.len());
let mut stakes = HashMap::with_capacity(sets_and_keys_and_stakes.len());
let mut total_stake = 0;
for (set, key) in &sets_and_keys {
sets.push(*set);
keys.insert(set.network, SeraiAddress::from(*key));
let stake = serai
.validator_sets()
.total_allocated_stake(set.network.into())
.await
.map_err(|e| format!("{e:?}"))?
.unwrap_or(Amount(0))
.0;
stakes.insert(set.network, stake);
total_stake += stake;
for (set, key, stake) in sets_and_keys_and_stakes {
sets.push(set);
keys.insert(set.network, key);
stakes.insert(set.network, stake.0);
total_stake += stake.0;
}
if total_stake == 0 {
Err(format!("cosigning sets for block #{block_number} had 0 stake in total"))?;

View File

@@ -7,18 +7,27 @@ use std::{sync::Arc, collections::HashMap, time::Instant};
use blake2::{Digest, Blake2s256};
use scale::{Encode, Decode};
use borsh::{BorshSerialize, BorshDeserialize};
use serai_client::{
primitives::{ExternalNetworkId, SeraiAddress},
validator_sets::primitives::{Session, ExternalValidatorSet, KeyPair},
Public, Block, Serai, TemporalSerai,
use serai_client_serai::{
abi::{
primitives::{
BlockHash,
crypto::{Public, KeyPair},
network_id::ExternalNetworkId,
validator_sets::{Session, ExternalValidatorSet},
address::SeraiAddress,
},
Block,
},
Serai, State,
};
use serai_db::*;
use serai_task::*;
pub use serai_cosign_types::*;
/// The cosigns which are intended to be performed.
mod intend;
/// The evaluator of the cosigns.
@@ -28,9 +37,6 @@ mod delay;
pub use delay::BROADCAST_FREQUENCY;
use delay::LatestCosignedBlockNumber;
/// The schnorrkel context to used when signing a cosign.
pub const COSIGN_CONTEXT: &[u8] = b"/serai/coordinator/cosign";
/// A 'global session', defined as all validator sets used for cosigning at a given moment.
///
/// We evaluate cosign faults within a global session. This ensures even if cosigners cosign
@@ -53,7 +59,7 @@ pub const COSIGN_CONTEXT: &[u8] = b"/serai/coordinator/cosign";
pub(crate) struct GlobalSession {
pub(crate) start_block_number: u64,
pub(crate) sets: Vec<ExternalValidatorSet>,
pub(crate) keys: HashMap<ExternalNetworkId, SeraiAddress>,
pub(crate) keys: HashMap<ExternalNetworkId, Public>,
pub(crate) stakes: HashMap<ExternalNetworkId, u64>,
pub(crate) total_stake: u64,
}
@@ -78,74 +84,12 @@ enum HasEvents {
No,
}
/// An intended cosign.
#[derive(Clone, Copy, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)]
pub struct CosignIntent {
/// The global session this cosign is being performed under.
pub global_session: [u8; 32],
/// The number of the block to cosign.
pub block_number: u64,
/// The hash of the block to cosign.
pub block_hash: [u8; 32],
/// If this cosign must be handled before further cosigns are.
pub notable: bool,
}
/// A cosign.
#[derive(Clone, PartialEq, Eq, Debug, Encode, Decode, BorshSerialize, BorshDeserialize)]
pub struct Cosign {
/// The global session this cosign is being performed under.
pub global_session: [u8; 32],
/// The number of the block to cosign.
pub block_number: u64,
/// The hash of the block to cosign.
pub block_hash: [u8; 32],
/// The actual cosigner.
pub cosigner: ExternalNetworkId,
}
impl CosignIntent {
/// Convert this into a `Cosign`.
pub fn into_cosign(self, cosigner: ExternalNetworkId) -> Cosign {
let CosignIntent { global_session, block_number, block_hash, notable: _ } = self;
Cosign { global_session, block_number, block_hash, cosigner }
}
}
impl Cosign {
/// The message to sign when signing this cosign.
///
/// This must be signed with schnorrkel, the context set to `COSIGN_CONTEXT`.
pub fn signature_message(&self) -> Vec<u8> {
// We use a schnorrkel context to domain-separate this
self.encode()
}
}
/// A signed cosign.
#[derive(Clone, Debug, BorshSerialize, BorshDeserialize)]
pub struct SignedCosign {
/// The cosign.
pub cosign: Cosign,
/// The signature for the cosign.
pub signature: [u8; 64],
}
impl SignedCosign {
fn verify_signature(&self, signer: serai_client::Public) -> bool {
let Ok(signer) = schnorrkel::PublicKey::from_bytes(&signer.0) else { return false };
let Ok(signature) = schnorrkel::Signature::from_bytes(&self.signature) else { return false };
signer.verify_simple(COSIGN_CONTEXT, &self.cosign.signature_message(), &signature).is_ok()
}
}
create_db! {
Cosign {
// The following are populated by the intend task and used throughout the library
// An index of Substrate blocks
SubstrateBlockHash: (block_number: u64) -> [u8; 32],
SubstrateBlockHash: (block_number: u64) -> BlockHash,
// A mapping from a global session's ID to its relevant information.
GlobalSessions: (global_session: [u8; 32]) -> GlobalSession,
// The last block to be cosigned by a global session.
@@ -177,60 +121,6 @@ create_db! {
}
}
/// Fetch the keys used for cosigning by a specific network.
async fn keys_for_network(
serai: &TemporalSerai<'_>,
network: ExternalNetworkId,
) -> Result<Option<(Session, KeyPair)>, String> {
let Some(latest_session) =
serai.validator_sets().session(network.into()).await.map_err(|e| format!("{e:?}"))?
else {
// If this network hasn't had a session declared, move on
return Ok(None);
};
// Get the keys for the latest session
if let Some(keys) = serai
.validator_sets()
.keys(ExternalValidatorSet { network, session: latest_session })
.await
.map_err(|e| format!("{e:?}"))?
{
return Ok(Some((latest_session, keys)));
}
// If the latest session has yet to set keys, use the prior session
if let Some(prior_session) = latest_session.0.checked_sub(1).map(Session) {
if let Some(keys) = serai
.validator_sets()
.keys(ExternalValidatorSet { network, session: prior_session })
.await
.map_err(|e| format!("{e:?}"))?
{
return Ok(Some((prior_session, keys)));
}
}
Ok(None)
}
/// Fetch the `ExternalValidatorSet`s, and their associated keys, used for cosigning as of this
/// block.
async fn cosigning_sets(
serai: &TemporalSerai<'_>,
) -> Result<Vec<(ExternalValidatorSet, Public)>, String> {
let mut sets = Vec::with_capacity(serai_client::primitives::EXTERNAL_NETWORKS.len());
for network in serai_client::primitives::EXTERNAL_NETWORKS {
let Some((session, keys)) = keys_for_network(serai, network).await? else {
// If this network doesn't have usable keys, move on
continue;
};
sets.push((ExternalValidatorSet { network, session }, keys.0));
}
Ok(sets)
}
/// An object usable to request notable cosigns for a block.
pub trait RequestNotableCosigns: 'static + Send {
/// The error type which may be encountered when requesting notable cosigns.
@@ -331,7 +221,10 @@ impl<D: Db> Cosigning<D> {
}
/// Fetch a cosigned Substrate block's hash by its block number.
pub fn cosigned_block(getter: &impl Get, block_number: u64) -> Result<Option<[u8; 32]>, Faulted> {
pub fn cosigned_block(
getter: &impl Get,
block_number: u64,
) -> Result<Option<BlockHash>, Faulted> {
if block_number > Self::latest_cosigned_block_number(getter)? {
return Ok(None);
}
@@ -346,8 +239,8 @@ impl<D: Db> Cosigning<D> {
/// If this global session hasn't produced any notable cosigns, this will return the latest
/// cosigns for this session.
pub fn notable_cosigns(getter: &impl Get, global_session: [u8; 32]) -> Vec<SignedCosign> {
let mut cosigns = Vec::with_capacity(serai_client::primitives::EXTERNAL_NETWORKS.len());
for network in serai_client::primitives::EXTERNAL_NETWORKS {
let mut cosigns = vec![];
for network in ExternalNetworkId::all() {
if let Some(cosign) = NetworksLatestCosignedBlock::get(getter, global_session, network) {
cosigns.push(cosign);
}
@@ -364,7 +257,7 @@ impl<D: Db> Cosigning<D> {
let mut cosigns = Faults::get(&self.db, faulted).expect("faulted with no faults");
// Also include all of our recognized-as-honest cosigns in an attempt to induce fault
// identification in those who see the faulty cosigns as honest
for network in serai_client::primitives::EXTERNAL_NETWORKS {
for network in ExternalNetworkId::all() {
if let Some(cosign) = NetworksLatestCosignedBlock::get(&self.db, faulted, network) {
if cosign.cosign.global_session == faulted {
cosigns.push(cosign);
@@ -376,8 +269,8 @@ impl<D: Db> Cosigning<D> {
let Some(global_session) = evaluator::currently_evaluated_global_session(&self.db) else {
return vec![];
};
let mut cosigns = Vec::with_capacity(serai_client::primitives::EXTERNAL_NETWORKS.len());
for network in serai_client::primitives::EXTERNAL_NETWORKS {
let mut cosigns = vec![];
for network in ExternalNetworkId::all() {
if let Some(cosign) = NetworksLatestCosignedBlock::get(&self.db, global_session, network) {
cosigns.push(cosign);
}
@@ -432,13 +325,8 @@ impl<D: Db> Cosigning<D> {
// Check the cosign's signature
{
let key = Public::from({
let Some(key) = global_session.keys.get(&network) else {
Err(IntakeCosignError::NonParticipatingNetwork)?
};
*key
});
let key =
*global_session.keys.get(&network).ok_or(IntakeCosignError::NonParticipatingNetwork)?;
if !signed_cosign.verify_signature(key) {
Err(IntakeCosignError::InvalidSignature)?;
}

View File

@@ -0,0 +1,25 @@
[package]
name = "serai-cosign-types"
version = "0.1.0"
description = "Types used when cosigning the Serai network"
license = "AGPL-3.0-only"
repository = "https://github.com/serai-dex/serai/tree/develop/coordinator/cosign"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = []
edition = "2021"
publish = false
rust-version = "1.85"
[package.metadata.docs.rs]
all-features = true
rustdoc-args = ["--cfg", "docsrs"]
[lints]
workspace = true
[dependencies]
schnorrkel = { version = "0.11", default-features = false, features = ["std"] }
borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] }
serai-primitives = { path = "../../../substrate/primitives", default-features = false, features = ["std"] }

View File

@@ -0,0 +1,72 @@
#![cfg_attr(docsrs, feature(doc_cfg))]
#![deny(missing_docs)]
//! Types used when cosigning Serai. For more info, please see `serai-cosign`.
use borsh::{BorshSerialize, BorshDeserialize};
use serai_primitives::{BlockHash, crypto::Public, network_id::ExternalNetworkId};
/// The schnorrkel context to use when signing a cosign.
pub const COSIGN_CONTEXT: &[u8] = b"/serai/coordinator/cosign";
/// An intended cosign.
#[derive(Clone, Copy, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)]
pub struct CosignIntent {
/// The global session this cosign is being performed under.
pub global_session: [u8; 32],
/// The number of the block to cosign.
pub block_number: u64,
/// The hash of the block to cosign.
pub block_hash: BlockHash,
/// If this cosign must be handled before further cosigns are.
pub notable: bool,
}
/// A cosign.
#[derive(Clone, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)]
pub struct Cosign {
/// The global session this cosign is being performed under.
pub global_session: [u8; 32],
/// The number of the block to cosign.
pub block_number: u64,
/// The hash of the block to cosign.
pub block_hash: BlockHash,
/// The actual cosigner.
pub cosigner: ExternalNetworkId,
}
impl CosignIntent {
/// Convert this into a `Cosign`.
pub fn into_cosign(self, cosigner: ExternalNetworkId) -> Cosign {
let CosignIntent { global_session, block_number, block_hash, notable: _ } = self;
Cosign { global_session, block_number, block_hash, cosigner }
}
}
impl Cosign {
/// The message to sign when signing this cosign.
///
/// This must be signed with schnorrkel, the context set to `COSIGN_CONTEXT`.
pub fn signature_message(&self) -> Vec<u8> {
// We use a schnorrkel context to domain-separate this
borsh::to_vec(self).unwrap()
}
}
/// A signed cosign.
#[derive(Clone, Debug, BorshSerialize, BorshDeserialize)]
pub struct SignedCosign {
/// The cosign.
pub cosign: Cosign,
/// The signature for the cosign.
pub signature: [u8; 64],
}
impl SignedCosign {
/// Verify a cosign's signature.
pub fn verify_signature(&self, signer: Public) -> bool {
let Ok(signer) = schnorrkel::PublicKey::from_bytes(&signer.0) else { return false };
let Ok(signature) = schnorrkel::Signature::from_bytes(&self.signature) else { return false };
signer.verify_simple(COSIGN_CONTEXT, &self.cosign.signature_message(), &signature).is_ok()
}
}
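// A minimal usage sketch (not part of the diff): signing a `Cosign` under `COSIGN_CONTEXT` and
// checking it with `verify_signature`. Assumes `rand_core::OsRng` is available and uses
// schnorrkel's `sign_simple`, mirroring the `verify_simple` call above.
fn _example_sign_and_verify() {
  use rand_core::OsRng;
  let keypair = schnorrkel::Keypair::generate_with(&mut OsRng);
  let cosign = Cosign {
    global_session: [0; 32],
    block_number: 1,
    block_hash: BlockHash([0; 32]),
    cosigner: ExternalNetworkId::Bitcoin,
  };
  // Sign the borsh-encoded cosign, domain-separated by the schnorrkel context
  let signature = keypair.sign_simple(COSIGN_CONTEXT, &cosign.signature_message()).to_bytes();
  let signed = SignedCosign { cosign, signature };
  assert!(signed.verify_signature(Public(keypair.public.to_bytes())));
}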

View File

@@ -22,7 +22,7 @@ borsh = { version = "1", default-features = false, features = ["std", "derive",
serai-db = { path = "../../common/db", version = "0.1" }
serai-client = { path = "../../substrate/client", default-features = false, features = ["serai", "borsh"] }
serai-primitives = { path = "../../substrate/primitives", default-features = false, features = ["std"] }
serai-cosign = { path = "../cosign" }
tributary-sdk = { path = "../tributary-sdk" }

View File

@@ -29,7 +29,7 @@ schnorrkel = { version = "0.11", default-features = false, features = ["std"] }
hex = { version = "0.4", default-features = false, features = ["std"] }
borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] }
serai-client = { path = "../../../substrate/client", default-features = false, features = ["serai", "borsh"] }
serai-client-serai = { path = "../../../substrate/client/serai", default-features = false }
serai-cosign = { path = "../../cosign" }
tributary-sdk = { path = "../../tributary-sdk" }

View File

@@ -7,7 +7,7 @@ use rand_core::{RngCore, OsRng};
use blake2::{Digest, Blake2s256};
use schnorrkel::{Keypair, PublicKey, Signature};
use serai_client::primitives::PublicKey as Public;
use serai_client_serai::abi::primitives::crypto::Public;
use futures_util::{AsyncRead, AsyncReadExt, AsyncWrite, AsyncWriteExt};
use libp2p::{
@@ -104,7 +104,7 @@ impl OnlyValidators {
.verify_simple(PROTOCOL.as_bytes(), &msg, &sig)
.map_err(|_| io::Error::other("invalid signature"))?;
Ok(peer_id_from_public(Public::from_raw(public_key.to_bytes())))
Ok(peer_id_from_public(Public(public_key.to_bytes())))
}
}

View File

@@ -1,11 +1,11 @@
use core::future::Future;
use core::{future::Future, str::FromStr};
use std::{sync::Arc, collections::HashSet};
use rand_core::{RngCore, OsRng};
use tokio::sync::mpsc;
use serai_client::{SeraiError, Serai};
use serai_client_serai::{RpcError, Serai};
use libp2p::{
core::multiaddr::{Protocol, Multiaddr},
@@ -50,7 +50,7 @@ impl ContinuallyRan for DialTask {
const DELAY_BETWEEN_ITERATIONS: u64 = 5 * 60;
const MAX_DELAY_BETWEEN_ITERATIONS: u64 = 10 * 60;
type Error = SeraiError;
type Error = RpcError;
fn run_iteration(&mut self) -> impl Send + Future<Output = Result<bool, Self::Error>> {
async move {
@@ -94,6 +94,13 @@ impl ContinuallyRan for DialTask {
usize::try_from(OsRng.next_u64() % u64::try_from(potential_peers.len()).unwrap())
.unwrap();
let randomly_selected_peer = potential_peers.swap_remove(index_to_dial);
let Ok(randomly_selected_peer) = libp2p::Multiaddr::from_str(&randomly_selected_peer)
else {
log::error!(
"peer from substrate wasn't a valid `Multiaddr`: {randomly_selected_peer}"
);
continue;
};
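// (Note, not in the diff's comments: peers now arrive from the RPC as strings, so each
// address is validated as a `Multiaddr` before dialing and skipped, with an error logged,
// when malformed.)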
log::info!("found peer from substrate: {randomly_selected_peer}");

View File

@@ -13,9 +13,10 @@ use rand_core::{RngCore, OsRng};
use zeroize::Zeroizing;
use schnorrkel::Keypair;
use serai_client::{
primitives::{ExternalNetworkId, PublicKey},
validator_sets::primitives::ExternalValidatorSet,
use serai_client_serai::{
abi::primitives::{
crypto::Public, network_id::ExternalNetworkId, validator_sets::ExternalValidatorSet,
},
Serai,
};
@@ -66,7 +67,7 @@ use dial::DialTask;
const PORT: u16 = 30563; // 5132 ^ (('c' << 8) | 'o')
fn peer_id_from_public(public: PublicKey) -> PeerId {
fn peer_id_from_public(public: Public) -> PeerId {
// 0 represents the identity Multihash, meaning no hash was performed
// It's an internal constant so we can't refer to the constant inside libp2p
PeerId::from_multihash(Multihash::wrap(0, &public.0).unwrap()).unwrap()

View File

@@ -6,7 +6,7 @@ use std::{
use borsh::BorshDeserialize;
use serai_client::validator_sets::primitives::ExternalValidatorSet;
use serai_client_serai::abi::primitives::validator_sets::ExternalValidatorSet;
use tokio::sync::{mpsc, oneshot, RwLock};

View File

@@ -4,9 +4,8 @@ use std::{
collections::{HashSet, HashMap},
};
use serai_client::{
primitives::ExternalNetworkId, validator_sets::primitives::Session, SeraiError, Serai,
};
use serai_client_serai::abi::primitives::{network_id::ExternalNetworkId, validator_sets::Session};
use serai_client_serai::{RpcError, Serai};
use serai_task::{Task, ContinuallyRan};
@@ -52,7 +51,7 @@ impl Validators {
async fn session_changes(
serai: impl Borrow<Serai>,
sessions: impl Borrow<HashMap<ExternalNetworkId, Session>>,
) -> Result<Vec<(ExternalNetworkId, Session, HashSet<PeerId>)>, SeraiError> {
) -> Result<Vec<(ExternalNetworkId, Session, HashSet<PeerId>)>, RpcError> {
/*
This uses the latest finalized block, not the latest cosigned block, which should be fine as
in the worst case, we'd connect to unexpected validators. They still shouldn't be able to
@@ -61,18 +60,18 @@ impl Validators {
Besides, we can't connect to historical validators, only the current validators.
*/
let temporal_serai = serai.borrow().as_of_latest_finalized_block().await?;
let temporal_serai = temporal_serai.validator_sets();
let serai = serai.borrow().state().await?;
let mut session_changes = vec![];
{
// FuturesUnordered can be bad practice as it'll cause timeouts if infrequently polled, but
// we poll it until it yields all futures, with the most minimal processing possible
let mut futures = FuturesUnordered::new();
for network in serai_client::primitives::EXTERNAL_NETWORKS {
for network in ExternalNetworkId::all() {
let sessions = sessions.borrow();
let serai = serai.borrow();
futures.push(async move {
let session = match temporal_serai.session(network.into()).await {
let session = match serai.current_session(network.into()).await {
Ok(Some(session)) => session,
Ok(None) => return Ok(None),
Err(e) => return Err(e),
@@ -81,12 +80,16 @@ impl Validators {
if sessions.get(&network) == Some(&session) {
Ok(None)
} else {
match temporal_serai.active_network_validators(network.into()).await {
Ok(validators) => Ok(Some((
match serai.current_validators(network.into()).await {
Ok(Some(validators)) => Ok(Some((
network,
session,
validators.into_iter().map(peer_id_from_public).collect(),
validators
.into_iter()
.map(|validator| peer_id_from_public(validator.into()))
.collect(),
))),
Ok(None) => panic!("network has session yet no validators"),
Err(e) => Err(e),
}
}
@@ -153,7 +156,7 @@ impl Validators {
}
/// Update the view of the validators.
pub(crate) async fn update(&mut self) -> Result<(), SeraiError> {
pub(crate) async fn update(&mut self) -> Result<(), RpcError> {
let session_changes = Self::session_changes(&*self.serai, &self.sessions).await?;
self.incorporate_session_changes(session_changes);
Ok(())
@@ -206,7 +209,7 @@ impl ContinuallyRan for UpdateValidatorsTask {
const DELAY_BETWEEN_ITERATIONS: u64 = 60;
const MAX_DELAY_BETWEEN_ITERATIONS: u64 = 5 * 60;
type Error = SeraiError;
type Error = RpcError;
fn run_iteration(&mut self) -> impl Send + Future<Output = Result<bool, Self::Error>> {
async move {

View File

@@ -1,7 +1,7 @@
use core::future::Future;
use std::time::{Duration, SystemTime};
use serai_client::validator_sets::primitives::{MAX_KEY_SHARES_PER_SET, ExternalValidatorSet};
use serai_primitives::validator_sets::{ExternalValidatorSet, KeyShares};
use futures_lite::FutureExt;
@@ -30,7 +30,7 @@ pub const MIN_BLOCKS_PER_BATCH: usize = BLOCKS_PER_MINUTE + 1;
/// commit is `8 + (validators * 32) + (32 + (validators * 32))` (for the time, list of validators,
/// and aggregate signature). Accordingly, this should be a safe over-estimate.
pub const BATCH_SIZE_LIMIT: usize = MIN_BLOCKS_PER_BATCH *
(tributary_sdk::BLOCK_SIZE_LIMIT + 32 + ((MAX_KEY_SHARES_PER_SET as usize) * 128));
(tributary_sdk::BLOCK_SIZE_LIMIT + 32 + ((KeyShares::MAX_PER_SET as usize) * 128));
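// (Derivation, not in the diff: per the comment above, a commit costs 40 + (64 * validators)
// bytes, so reserving 32 + (128 * KeyShares::MAX_PER_SET) bytes per block strictly dominates
// it for any non-empty validator set.)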
/// Sends a heartbeat to other validators on regular intervals informing them of our Tributary's
/// tip.

View File

@@ -7,7 +7,7 @@ use std::collections::HashMap;
use borsh::{BorshSerialize, BorshDeserialize};
use serai_client::{primitives::ExternalNetworkId, validator_sets::primitives::ExternalValidatorSet};
use serai_primitives::{network_id::ExternalNetworkId, validator_sets::ExternalValidatorSet};
use serai_db::Db;
use tributary_sdk::{ReadWrite, TransactionTrait, Tributary, TributaryReader};

View File

@@ -5,9 +5,10 @@ use serai_db::{create_db, db_channel};
use dkg::Participant;
use serai_client::{
primitives::ExternalNetworkId,
validator_sets::primitives::{Session, ExternalValidatorSet, KeyPair},
use serai_client_serai::abi::primitives::{
crypto::KeyPair,
network_id::ExternalNetworkId,
validator_sets::{Session, ExternalValidatorSet},
};
use serai_cosign::SignedCosign;
@@ -103,7 +104,7 @@ mod _internal_db {
// Tributary transactions to publish from the DKG confirmation task
TributaryTransactionsFromDkgConfirmation: (set: ExternalValidatorSet) -> Transaction,
// Participants to remove
RemoveParticipant: (set: ExternalValidatorSet) -> Participant,
RemoveParticipant: (set: ExternalValidatorSet) -> u16,
}
}
}
@@ -139,10 +140,11 @@ impl RemoveParticipant {
pub(crate) fn send(txn: &mut impl DbTxn, set: ExternalValidatorSet, participant: Participant) {
// If this set has yet to be retired, send this transaction
if RetiredTributary::get(txn, set.network).map(|session| session.0) < Some(set.session.0) {
_internal_db::RemoveParticipant::send(txn, set, &participant);
_internal_db::RemoveParticipant::send(txn, set, &u16::from(participant));
}
}
pub(crate) fn try_recv(txn: &mut impl DbTxn, set: ExternalValidatorSet) -> Option<Participant> {
_internal_db::RemoveParticipant::try_recv(txn, set)
.map(|i| Participant::new(i).expect("sent invalid participant index for removal"))
}
}
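// A hedged sketch (not from the diff) of the round-trip above: `Participant` is stored as its
// `u16` index and rebuilt on receipt, panicking only if an invalid index was ever sent.
fn _participant_round_trip(participant: Participant) -> Participant {
  let index = u16::from(participant);
  Participant::new(index).expect("index came from a valid participant")
}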

View File

@@ -12,10 +12,8 @@ use frost_schnorrkel::{
use serai_db::{DbTxn, Db as DbTrait};
use serai_client::{
primitives::SeraiAddress,
validator_sets::primitives::{ExternalValidatorSet, musig_context, set_keys_message},
};
#[rustfmt::skip]
use serai_client_serai::abi::primitives::{validator_sets::ExternalValidatorSet, address::SeraiAddress};
use serai_task::{DoesNotError, ContinuallyRan};
@@ -160,7 +158,7 @@ impl<CD: DbTrait, TD: DbTrait> ConfirmDkgTask<CD, TD> {
let (machine, preprocess) = AlgorithmMachine::new(
schnorrkel(),
// We use a 1-of-1 Musig here as we don't know who will actually be in this Musig yet
musig(musig_context(set.into()), key, &[public_key]).unwrap(),
musig(ExternalValidatorSet::musig_context(&set), key, &[public_key]).unwrap(),
)
.preprocess(&mut OsRng);
// We take the preprocess so we can use it in a distinct machine with the actual Musig
@@ -260,9 +258,12 @@ impl<CD: DbTrait, TD: DbTrait> ContinuallyRan for ConfirmDkgTask<CD, TD> {
})
.collect::<Vec<_>>();
let keys =
musig(musig_context(self.set.set.into()), self.key.clone(), &musig_public_keys)
.unwrap();
let keys = musig(
ExternalValidatorSet::musig_context(&self.set.set),
self.key.clone(),
&musig_public_keys,
)
.unwrap();
// Rebuild the machine
let (machine, preprocess_from_cache) =
@@ -296,9 +297,10 @@ impl<CD: DbTrait, TD: DbTrait> ContinuallyRan for ConfirmDkgTask<CD, TD> {
};
// Calculate our share
let (machine, share) = match handle_frost_error(
machine.sign(preprocesses, &set_keys_message(&self.set.set, &key_pair)),
) {
let (machine, share) = match handle_frost_error(machine.sign(
preprocesses,
&ExternalValidatorSet::set_keys_message(&self.set.set, &key_pair),
)) {
Ok((machine, share)) => (machine, share),
// This yields the *musig participant index*
Err(participant) => {

View File

@@ -14,9 +14,14 @@ use borsh::BorshDeserialize;
use tokio::sync::mpsc;
use serai_client::{
primitives::{ExternalNetworkId, PublicKey, SeraiAddress, Signature},
validator_sets::primitives::{ExternalValidatorSet, KeyPair},
use serai_client_serai::{
abi::primitives::{
BlockHash,
crypto::{Public, Signature, ExternalKey, KeyPair},
network_id::ExternalNetworkId,
validator_sets::ExternalValidatorSet,
address::SeraiAddress,
},
Serai,
};
use message_queue::{Service, client::MessageQueue};
@@ -61,9 +66,7 @@ async fn serai() -> Arc<Serai> {
let Ok(serai) = Serai::new(format!(
"http://{}:9944",
serai_env::var("SERAI_HOSTNAME").expect("Serai hostname wasn't provided")
))
.await
else {
)) else {
log::error!("couldn't connect to the Serai node");
tokio::time::sleep(delay).await;
delay = (delay + SERAI_CONNECTION_DELAY).min(MAX_SERAI_CONNECTION_DELAY);
@@ -213,10 +216,12 @@ async fn handle_network(
&mut txn,
ExternalValidatorSet { network, session },
&KeyPair(
PublicKey::from_raw(substrate_key),
network_key
.try_into()
.expect("generated a network key which exceeds the maximum key length"),
Public(substrate_key),
ExternalKey(
network_key
.try_into()
.expect("generated a network key which exceeds the maximum key length"),
),
),
);
}
@@ -284,12 +289,13 @@ async fn handle_network(
&mut txn,
ExternalValidatorSet { network, session },
slash_report,
Signature::from(signature),
Signature(signature),
);
}
},
messages::ProcessorMessage::Substrate(msg) => match msg {
messages::substrate::ProcessorMessage::SubstrateBlockAck { block, plans } => {
let block = BlockHash(block);
let mut by_session = HashMap::new();
for plan in plans {
by_session
@@ -481,7 +487,7 @@ async fn main() {
);
// Handle each of the networks
for network in serai_client::primitives::EXTERNAL_NETWORKS {
for network in ExternalNetworkId::all() {
tokio::spawn(handle_network(db.clone(), message_queue.clone(), serai.clone(), network));
}

View File

@@ -10,7 +10,10 @@ use tokio::sync::mpsc;
use serai_db::{DbTxn, Db as DbTrait};
use serai_client::validator_sets::primitives::{Session, ExternalValidatorSet};
use serai_client_serai::abi::primitives::{
network_id::ExternalNetworkId,
validator_sets::{Session, ExternalValidatorSet},
};
use message_queue::{Service, Metadata, client::MessageQueue};
use tributary_sdk::Tributary;
@@ -39,7 +42,7 @@ impl<P: P2p> ContinuallyRan for SubstrateTask<P> {
let mut made_progress = false;
// Handle the Canonical events
for network in serai_client::primitives::EXTERNAL_NETWORKS {
for network in ExternalNetworkId::all() {
loop {
let mut txn = self.db.txn();
let Some(msg) = serai_coordinator_substrate::Canonical::try_recv(&mut txn, network)

View File

@@ -11,8 +11,7 @@ use tokio::sync::mpsc;
use serai_db::{Get, DbTxn, Db as DbTrait, create_db, db_channel};
use scale::Encode;
use serai_client::validator_sets::primitives::ExternalValidatorSet;
use serai_client_serai::abi::primitives::validator_sets::ExternalValidatorSet;
use tributary_sdk::{TransactionKind, TransactionError, ProvidedError, TransactionTrait, Tributary};
@@ -479,7 +478,8 @@ pub(crate) async fn spawn_tributary<P: P2p>(
return;
}
let genesis = <[u8; 32]>::from(Blake2s::<U32>::digest((set.serai_block, set.set).encode()));
let genesis =
<[u8; 32]>::from(Blake2s::<U32>::digest(borsh::to_vec(&(set.serai_block, set.set)).unwrap()));
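// (Note, not in the diff: the genesis is the hash of the borsh-encoded `(serai_block, set)`
// pair, so distinct sets, and distinct declarations, yield distinct Tributary genesis values.)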
// Since the Serai block will be finalized, then cosigned, before we handle this, this time will
// be a couple of minutes stale. While the Tributary will still function with a start time in the

View File

@@ -20,17 +20,15 @@ workspace = true
[dependencies]
bitvec = { version = "1", default-features = false, features = ["std"] }
scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["std", "derive", "bit-vec"] }
borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] }
dkg = { path = "../../crypto/dkg", default-features = false, features = ["std"] }
serai-client = { path = "../../substrate/client", version = "0.1", default-features = false, features = ["serai", "borsh"] }
serai-client-serai = { path = "../../substrate/client/serai", default-features = false }
log = { version = "0.4", default-features = false, features = ["std"] }
futures = { version = "0.3", default-features = false, features = ["std"] }
tokio = { version = "1", default-features = false }
serai-db = { path = "../../common/db", version = "0.1.1" }
serai-task = { path = "../../common/task", version = "0.1" }

View File

@@ -3,7 +3,13 @@ use std::sync::Arc;
use futures::stream::{StreamExt, FuturesOrdered};
use serai_client::{validator_sets::primitives::ExternalValidatorSet, Serai};
use serai_client_serai::{
abi::{
self,
primitives::{network_id::ExternalNetworkId, validator_sets::ExternalValidatorSet},
},
Serai,
};
use messages::substrate::{InInstructionResult, ExecutedBatch, CoordinatorMessage};
@@ -15,6 +21,7 @@ use serai_cosign::Cosigning;
create_db!(
CoordinatorSubstrateCanonical {
NextBlock: () -> u64,
LastIndexedBatchId: (network: ExternalNetworkId) -> u32,
}
);
@@ -45,10 +52,10 @@ impl<D: Db> ContinuallyRan for CanonicalEventStream<D> {
// These are all the events which generate canonical messages
struct CanonicalEvents {
time: u64,
key_gen_events: Vec<serai_client::validator_sets::ValidatorSetsEvent>,
set_retired_events: Vec<serai_client::validator_sets::ValidatorSetsEvent>,
batch_events: Vec<serai_client::in_instructions::InInstructionsEvent>,
burn_events: Vec<serai_client::coins::CoinsEvent>,
set_keys_events: Vec<abi::validator_sets::Event>,
slash_report_events: Vec<abi::validator_sets::Event>,
batch_events: Vec<abi::in_instructions::Event>,
burn_events: Vec<abi::coins::Event>,
}
// For a cosigned block, fetch all relevant events
@@ -66,40 +73,24 @@ impl<D: Db> ContinuallyRan for CanonicalEventStream<D> {
}
Err(serai_cosign::Faulted) => return Err("cosigning process faulted".to_string()),
};
let temporal_serai = serai.as_of(block_hash);
let temporal_serai_validators = temporal_serai.validator_sets();
let temporal_serai_instructions = temporal_serai.in_instructions();
let temporal_serai_coins = temporal_serai.coins();
let (block, key_gen_events, set_retired_events, batch_events, burn_events) =
tokio::try_join!(
serai.block(block_hash),
temporal_serai_validators.key_gen_events(),
temporal_serai_validators.set_retired_events(),
temporal_serai_instructions.batch_events(),
temporal_serai_coins.burn_with_instruction_events(),
)
.map_err(|e| format!("{e:?}"))?;
let Some(block) = block else {
let events = serai.events(block_hash).await.map_err(|e| format!("{e}"))?;
let set_keys_events = events.validator_sets().set_keys_events().cloned().collect();
let slash_report_events =
events.validator_sets().slash_report_events().cloned().collect();
let batch_events = events.in_instructions().batch_events().cloned().collect();
let burn_events = events.coins().burn_with_instruction_events().cloned().collect();
let Some(block) = serai.block(block_hash).await.map_err(|e| format!("{e:?}"))? else {
Err(format!("Serai node didn't have cosigned block #{block_number}"))?
};
let time = if block_number == 0 {
block.time().unwrap_or(0)
} else {
// Serai's block time is in milliseconds
block
.time()
.ok_or_else(|| "non-genesis Serai block didn't have a time".to_string())? /
1000
};
// We use time in seconds, not milliseconds, here
let time = block.header.unix_time_in_millis() / 1000;
Ok((
block_number,
CanonicalEvents {
time,
key_gen_events,
set_retired_events,
set_keys_events,
slash_report_events,
batch_events,
burn_events,
},
@@ -131,10 +122,9 @@ impl<D: Db> ContinuallyRan for CanonicalEventStream<D> {
let mut txn = self.db.txn();
for key_gen in block.key_gen_events {
let serai_client::validator_sets::ValidatorSetsEvent::KeyGen { set, key_pair } = &key_gen
else {
panic!("KeyGen event wasn't a KeyGen event: {key_gen:?}");
for set_keys in block.set_keys_events {
let abi::validator_sets::Event::SetKeys { set, key_pair } = &set_keys else {
panic!("`SetKeys` event wasn't a `SetKeys` event: {set_keys:?}");
};
crate::Canonical::send(
&mut txn,
@@ -147,12 +137,10 @@ impl<D: Db> ContinuallyRan for CanonicalEventStream<D> {
);
}
for set_retired in block.set_retired_events {
let serai_client::validator_sets::ValidatorSetsEvent::SetRetired { set } = &set_retired
else {
panic!("SetRetired event wasn't a SetRetired event: {set_retired:?}");
for slash_report in block.slash_report_events {
let abi::validator_sets::Event::SlashReport { set } = &slash_report else {
panic!("`SlashReport` event wasn't a `SlashReport` event: {slash_report:?}");
};
let Ok(set) = ExternalValidatorSet::try_from(*set) else { continue };
crate::Canonical::send(
&mut txn,
set.network,
@@ -160,10 +148,12 @@ impl<D: Db> ContinuallyRan for CanonicalEventStream<D> {
);
}
for network in serai_client::primitives::EXTERNAL_NETWORKS {
for network in ExternalNetworkId::all() {
let mut batch = None;
for this_batch in &block.batch_events {
let serai_client::in_instructions::InInstructionsEvent::Batch {
// Only irrefutable as this is the only member of the enum at this time
#[expect(irrefutable_let_patterns)]
let abi::in_instructions::Event::Batch {
network: batch_network,
publishing_session,
id,
@@ -194,14 +184,19 @@ impl<D: Db> ContinuallyRan for CanonicalEventStream<D> {
})
.collect(),
});
if LastIndexedBatchId::get(&txn, network) != id.checked_sub(1) {
panic!(
"next batch from Serai's ID was not an increment of the last indexed batch's ID"
);
}
LastIndexedBatchId::set(&mut txn, network, id);
}
}
let mut burns = vec![];
for burn in &block.burn_events {
let serai_client::coins::CoinsEvent::BurnWithInstruction { from: _, instruction } =
&burn
else {
let abi::coins::Event::BurnWithInstruction { from: _, instruction } = &burn else {
panic!("BurnWithInstruction event wasn't a BurnWithInstruction event: {burn:?}");
};
if instruction.balance.coin.network() == network {
@@ -223,3 +218,7 @@ impl<D: Db> ContinuallyRan for CanonicalEventStream<D> {
}
}
}
pub(crate) fn last_indexed_batch_id(txn: &impl DbTxn, network: ExternalNetworkId) -> Option<u32> {
LastIndexedBatchId::get(txn, network)
}
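// A minimal sketch (assumption, not from the diff) of the continuity invariant enforced above:
// per network, each batch ID must be exactly one greater than the last indexed ID, with the
// first batch having ID 0.
fn _batch_ids_contiguous(last_indexed: Option<u32>, next: u32) -> bool {
  last_indexed == next.checked_sub(1)
}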

View File

@@ -3,9 +3,14 @@ use std::sync::Arc;
use futures::stream::{StreamExt, FuturesOrdered};
use serai_client::{
primitives::{SeraiAddress, EmbeddedEllipticCurve},
validator_sets::primitives::{MAX_KEY_SHARES_PER_SET, ExternalValidatorSet},
use serai_client_serai::{
abi::primitives::{
BlockHash,
crypto::EmbeddedEllipticCurveKeys as EmbeddedEllipticCurveKeysStruct,
network_id::ExternalNetworkId,
validator_sets::{KeyShares, ExternalValidatorSet},
address::SeraiAddress,
},
Serai,
};
@@ -19,6 +24,10 @@ use crate::NewSetInformation;
create_db!(
CoordinatorSubstrateEphemeral {
NextBlock: () -> u64,
EmbeddedEllipticCurveKeys: (
network: ExternalNetworkId,
validator: SeraiAddress
) -> EmbeddedEllipticCurveKeysStruct,
}
);
@@ -49,10 +58,11 @@ impl<D: Db> ContinuallyRan for EphemeralEventStream<D> {
// These are all the events which generate canonical messages
struct EphemeralEvents {
block_hash: [u8; 32],
block_hash: BlockHash,
time: u64,
new_set_events: Vec<serai_client::validator_sets::ValidatorSetsEvent>,
accepted_handover_events: Vec<serai_client::validator_sets::ValidatorSetsEvent>,
embedded_elliptic_curve_keys_events: Vec<serai_client_serai::abi::validator_sets::Event>,
set_decided_events: Vec<serai_client_serai::abi::validator_sets::Event>,
accepted_handover_events: Vec<serai_client_serai::abi::validator_sets::Event>,
}
// For a cosigned block, fetch all relevant events
@@ -71,31 +81,31 @@ impl<D: Db> ContinuallyRan for EphemeralEventStream<D> {
Err(serai_cosign::Faulted) => return Err("cosigning process faulted".to_string()),
};
let temporal_serai = serai.as_of(block_hash);
let temporal_serai_validators = temporal_serai.validator_sets();
let (block, new_set_events, accepted_handover_events) = tokio::try_join!(
serai.block(block_hash),
temporal_serai_validators.new_set_events(),
temporal_serai_validators.accepted_handover_events(),
)
.map_err(|e| format!("{e:?}"))?;
let Some(block) = block else {
let events = serai.events(block_hash).await.map_err(|e| format!("{e}"))?;
let embedded_elliptic_curve_keys_events = events
.validator_sets()
.set_embedded_elliptic_curve_keys_events()
.cloned()
.collect::<Vec<_>>();
let set_decided_events =
events.validator_sets().set_decided_events().cloned().collect::<Vec<_>>();
let accepted_handover_events =
events.validator_sets().accepted_handover_events().cloned().collect::<Vec<_>>();
let Some(block) = serai.block(block_hash).await.map_err(|e| format!("{e:?}"))? else {
Err(format!("Serai node didn't have cosigned block #{block_number}"))?
};
let time = if block_number == 0 {
block.time().unwrap_or(0)
} else {
// Serai's block time is in milliseconds
block
.time()
.ok_or_else(|| "non-genesis Serai block didn't have a time".to_string())? /
1000
};
// We use time in seconds, not milliseconds, here
let time = block.header.unix_time_in_millis() / 1000;
Ok((
block_number,
EphemeralEvents { block_hash, time, new_set_events, accepted_handover_events },
EphemeralEvents {
block_hash,
time,
embedded_elliptic_curve_keys_events,
set_decided_events,
accepted_handover_events,
},
))
}
}
@@ -126,105 +136,82 @@ impl<D: Db> ContinuallyRan for EphemeralEventStream<D> {
let mut txn = self.db.txn();
for new_set in block.new_set_events {
let serai_client::validator_sets::ValidatorSetsEvent::NewSet { set } = &new_set else {
panic!("NewSet event wasn't a NewSet event: {new_set:?}");
for event in block.embedded_elliptic_curve_keys_events {
let serai_client_serai::abi::validator_sets::Event::SetEmbeddedEllipticCurveKeys {
validator,
keys,
} = &event
else {
panic!(
"{}: {event:?}",
"`SetEmbeddedEllipticCurveKeys` event wasn't a `SetEmbeddedEllipticCurveKeys` event"
);
};
EmbeddedEllipticCurveKeys::set(&mut txn, keys.network(), *validator, keys);
}
for set_decided in block.set_decided_events {
let serai_client_serai::abi::validator_sets::Event::SetDecided { set, validators } =
&set_decided
else {
panic!("`SetDecided` event wasn't a `SetDecided` event: {set_decided:?}");
};
// We only coordinate over external networks
let Ok(set) = ExternalValidatorSet::try_from(*set) else { continue };
let validators =
validators.iter().map(|(validator, weight)| (*validator, weight.0)).collect::<Vec<_>>();
let serai = self.serai.as_of(block.block_hash);
let serai = serai.validator_sets();
let Some(validators) =
serai.participants(set.network.into()).await.map_err(|e| format!("{e:?}"))?
else {
Err(format!(
"block #{block_number} declared a new set but didn't have the participants"
))?
};
let validators = validators
.into_iter()
.map(|(validator, weight)| (SeraiAddress::from(validator), weight))
.collect::<Vec<_>>();
let in_set = validators.iter().any(|(validator, _)| *validator == self.validator);
if in_set {
if u16::try_from(validators.len()).is_err() {
Err("more than u16::MAX validators sent")?;
}
let Ok(validators) = validators
.into_iter()
.map(|(validator, weight)| u16::try_from(weight).map(|weight| (validator, weight)))
.collect::<Result<Vec<_>, _>>()
else {
Err("validator's weight exceeded u16::MAX".to_string())?
};
// Do the summation in u32 so we don't risk a u16 overflow
let total_weight = validators.iter().map(|(_, weight)| u32::from(*weight)).sum::<u32>();
if total_weight > u32::from(MAX_KEY_SHARES_PER_SET) {
if total_weight > u32::from(KeyShares::MAX_PER_SET) {
Err(format!(
"{set:?} has {total_weight} key shares when the max is {MAX_KEY_SHARES_PER_SET}"
"{set:?} has {total_weight} key shares when the max is {}",
KeyShares::MAX_PER_SET
))?;
}
let total_weight = u16::try_from(total_weight).unwrap();
let total_weight = u16::try_from(total_weight)
.expect("value smaller than `u16` constant but doesn't fit in `u16`");
// Fetch all of the validators' embedded elliptic curve keys
let mut embedded_elliptic_curve_keys = FuturesOrdered::new();
for (validator, _) in &validators {
let validator = *validator;
// try_join doesn't return a future so we need to wrap it in this additional async
// block
embedded_elliptic_curve_keys.push_back(async move {
tokio::try_join!(
// One future to fetch the substrate embedded key
serai.embedded_elliptic_curve_key(
validator.into(),
EmbeddedEllipticCurve::Embedwards25519
),
// One future to fetch the external embedded key, if there is a distinct curve
async {
// `embedded_elliptic_curves` is documented to have the second entry be the
// network-specific curve (if it exists and is distinct from Embedwards25519)
if let Some(curve) = set.network.embedded_elliptic_curves().get(1) {
serai.embedded_elliptic_curve_key(validator.into(), *curve).await.map(Some)
} else {
Ok(None)
}
}
)
.map(|(substrate_embedded_key, external_embedded_key)| {
(validator, substrate_embedded_key, external_embedded_key)
})
});
}
let mut evrf_public_keys = Vec::with_capacity(usize::from(total_weight));
for (validator, weight) in &validators {
let (future_validator, substrate_embedded_key, external_embedded_key) =
embedded_elliptic_curve_keys.next().await.unwrap().map_err(|e| format!("{e:?}"))?;
assert_eq!(*validator, future_validator);
let external_embedded_key =
external_embedded_key.unwrap_or(substrate_embedded_key.clone());
match (substrate_embedded_key, external_embedded_key) {
(Some(substrate_embedded_key), Some(external_embedded_key)) => {
let substrate_embedded_key = <[u8; 32]>::try_from(substrate_embedded_key)
.map_err(|_| "Embedwards25519 key wasn't 32 bytes".to_string())?;
for _ in 0 .. *weight {
evrf_public_keys.push((substrate_embedded_key, external_embedded_key.clone()));
}
let keys = match EmbeddedEllipticCurveKeys::get(&txn, set.network, *validator)
.expect("selected validator lacked embedded elliptic curve keys")
{
EmbeddedEllipticCurveKeysStruct::Bitcoin(substrate, external) => {
assert_eq!(set.network, ExternalNetworkId::Bitcoin);
(substrate, external.to_vec())
}
_ => Err("NewSet with validator missing an embedded key".to_string())?,
EmbeddedEllipticCurveKeysStruct::Ethereum(substrate, external) => {
assert_eq!(set.network, ExternalNetworkId::Ethereum);
(substrate, external.to_vec())
}
EmbeddedEllipticCurveKeysStruct::Monero(substrate) => {
assert_eq!(set.network, ExternalNetworkId::Monero);
(substrate, substrate.to_vec())
}
};
for _ in 0 .. *weight {
evrf_public_keys.push(keys.clone());
}
}
let mut new_set = NewSetInformation {
set,
serai_block: block.block_hash,
serai_block: block.block_hash.0,
declaration_time: block.time,
// TODO: This should be inlined into the Processor's key gen code
// It's legacy from when we removed participants from the key gen
threshold: ((total_weight * 2) / 3) + 1,
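// (Not in the diff: this is the standard BFT threshold, the smallest t with 3t > 2n,
// where n is the total weight in key shares.)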
// TODO: Why are `validators` and `evrf_public_keys` two separate fields?
validators,
evrf_public_keys,
participant_indexes: Default::default(),
@@ -238,7 +225,7 @@ impl<D: Db> ContinuallyRan for EphemeralEventStream<D> {
}
for accepted_handover in block.accepted_handover_events {
let serai_client::validator_sets::ValidatorSetsEvent::AcceptedHandover { set } =
let serai_client_serai::abi::validator_sets::Event::AcceptedHandover { set } =
&accepted_handover
else {
panic!("AcceptedHandover event wasn't a AcceptedHandover event: {accepted_handover:?}");

View File

@@ -4,15 +4,18 @@
use std::collections::HashMap;
use scale::{Encode, Decode};
use borsh::{BorshSerialize, BorshDeserialize};
use dkg::Participant;
use serai_client::{
primitives::{ExternalNetworkId, SeraiAddress, Signature},
validator_sets::primitives::{Session, ExternalValidatorSet, KeyPair, SlashReport},
in_instructions::primitives::SignedBatch,
use serai_client_serai::abi::{
primitives::{
network_id::ExternalNetworkId,
validator_sets::{Session, ExternalValidatorSet, SlashReport},
crypto::{Signature, KeyPair},
address::SeraiAddress,
instructions::SignedBatch,
},
Transaction,
};
@@ -20,6 +23,7 @@ use serai_db::*;
mod canonical;
pub use canonical::CanonicalEventStream;
use canonical::last_indexed_batch_id;
mod ephemeral;
pub use ephemeral::EphemeralEventStream;
@@ -38,7 +42,7 @@ pub struct NewSetInformation {
pub set: ExternalValidatorSet,
/// The Serai block which declared it.
pub serai_block: [u8; 32],
/// The time of the block which declared it, in seconds.
/// The time of the block which declared it, in seconds since the epoch.
pub declaration_time: u64,
/// The threshold to use.
pub threshold: u16,
@@ -97,9 +101,9 @@ mod _public_db {
create_db!(
CoordinatorSubstrate {
// Keys to set on the Serai network
Keys: (network: ExternalNetworkId) -> (Session, Vec<u8>),
Keys: (network: ExternalNetworkId) -> (Session, Transaction),
// Slash reports to publish onto the Serai network
SlashReports: (network: ExternalNetworkId) -> (Session, Vec<u8>),
SlashReports: (network: ExternalNetworkId) -> (Session, Transaction),
}
);
}
@@ -172,20 +176,19 @@ impl Keys {
}
}
let tx = serai_client::validator_sets::SeraiValidatorSets::set_keys(
let tx = serai_client_serai::ValidatorSets::set_keys(
set.network,
key_pair,
signature_participants,
signature,
);
_public_db::Keys::set(txn, set.network, &(set.session, tx.encode()));
_public_db::Keys::set(txn, set.network, &(set.session, tx));
}
pub(crate) fn take(
txn: &mut impl DbTxn,
network: ExternalNetworkId,
) -> Option<(Session, Transaction)> {
let (session, tx) = _public_db::Keys::take(txn, network)?;
Some((session, <_>::decode(&mut tx.as_slice()).unwrap()))
_public_db::Keys::take(txn, network)
}
}
@@ -194,7 +197,7 @@ pub struct SignedBatches;
impl SignedBatches {
/// Send a `SignedBatch` to publish onto Serai.
pub fn send(txn: &mut impl DbTxn, batch: &SignedBatch) {
_public_db::SignedBatches::send(txn, batch.batch.network, batch);
_public_db::SignedBatches::send(txn, batch.batch.network(), batch);
}
pub(crate) fn try_recv(txn: &mut impl DbTxn, network: ExternalNetworkId) -> Option<SignedBatch> {
_public_db::SignedBatches::try_recv(txn, network)
@@ -221,18 +224,14 @@ impl SlashReports {
}
}
let tx = serai_client::validator_sets::SeraiValidatorSets::report_slashes(
set.network,
slash_report,
signature,
);
_public_db::SlashReports::set(txn, set.network, &(set.session, tx.encode()));
let tx =
serai_client_serai::ValidatorSets::report_slashes(set.network, slash_report, signature);
_public_db::SlashReports::set(txn, set.network, &(set.session, tx));
}
pub(crate) fn take(
txn: &mut impl DbTxn,
network: ExternalNetworkId,
) -> Option<(Session, Transaction)> {
let (session, tx) = _public_db::SlashReports::take(txn, network)?;
Some((session, <_>::decode(&mut tx.as_slice()).unwrap()))
_public_db::SlashReports::take(txn, network)
}
}

View File

@@ -1,8 +1,10 @@
use core::future::Future;
use std::sync::Arc;
#[rustfmt::skip]
use serai_client::{primitives::ExternalNetworkId, in_instructions::primitives::SignedBatch, SeraiError, Serai};
use serai_client_serai::{
abi::primitives::{network_id::ExternalNetworkId, instructions::SignedBatch},
RpcError, Serai,
};
use serai_db::{Get, DbTxn, Db, create_db};
use serai_task::ContinuallyRan;
@@ -31,7 +33,7 @@ impl<D: Db> PublishBatchTask<D> {
}
impl<D: Db> ContinuallyRan for PublishBatchTask<D> {
type Error = SeraiError;
type Error = RpcError;
fn run_iteration(&mut self) -> impl Send + Future<Output = Result<bool, Self::Error>> {
async move {
@@ -43,8 +45,8 @@ impl<D: Db> ContinuallyRan for PublishBatchTask<D> {
};
// If this is a Batch not yet published, save it into our unordered mapping
if LastPublishedBatch::get(&txn, self.network) < Some(batch.batch.id) {
BatchesToPublish::set(&mut txn, self.network, batch.batch.id, &batch);
if LastPublishedBatch::get(&txn, self.network) < Some(batch.batch.id()) {
BatchesToPublish::set(&mut txn, self.network, batch.batch.id(), &batch);
}
txn.commit();
@@ -52,12 +54,8 @@ impl<D: Db> ContinuallyRan for PublishBatchTask<D> {
// Synchronize our last published batch with the Serai network's
let next_to_publish = {
// This uses the latest finalized block, not the latest cosigned block, which should be
// fine as in the worst case, the only impact is no longer attempting TX publication
let serai = self.serai.as_of_latest_finalized_block().await?;
let last_batch = serai.in_instructions().last_batch_for_network(self.network).await?;
let mut txn = self.db.txn();
let last_batch = crate::last_indexed_batch_id(&txn, self.network);
let mut our_last_batch = LastPublishedBatch::get(&txn, self.network);
while our_last_batch < last_batch {
let next_batch = our_last_batch.map(|batch| batch + 1).unwrap_or(0);
@@ -68,6 +66,7 @@ impl<D: Db> ContinuallyRan for PublishBatchTask<D> {
if let Some(last_batch) = our_last_batch {
LastPublishedBatch::set(&mut txn, self.network, &last_batch);
}
txn.commit();
last_batch.map(|batch| batch + 1).unwrap_or(0)
};
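// (Note, not in the diff: with no batch indexed yet, this starts publication at batch 0,
// matching the `checked_sub(1)` continuity check on the indexing side.)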
@@ -75,7 +74,7 @@ impl<D: Db> ContinuallyRan for PublishBatchTask<D> {
if let Some(batch) = BatchesToPublish::get(&self.db, self.network, next_to_publish) {
self
.serai
.publish(&serai_client::in_instructions::SeraiInInstructions::execute_batch(batch))
.publish_transaction(&serai_client_serai::InInstructions::execute_batch(batch))
.await?;
true
} else {

View File

@@ -3,7 +3,10 @@ use std::sync::Arc;
use serai_db::{DbTxn, Db};
use serai_client::{primitives::ExternalNetworkId, validator_sets::primitives::Session, Serai};
use serai_client_serai::{
abi::primitives::{network_id::ExternalNetworkId, validator_sets::Session},
Serai,
};
use serai_task::ContinuallyRan;
@@ -33,10 +36,10 @@ impl<D: Db> PublishSlashReportTask<D> {
// This uses the latest finalized block, not the latest cosigned block, which should be
// fine as in the worst case, the only impact is no longer attempting TX publication
let serai = self.serai.as_of_latest_finalized_block().await.map_err(|e| format!("{e:?}"))?;
let serai = serai.validator_sets();
let serai = self.serai.state().await.map_err(|e| format!("{e:?}"))?;
let session_after_slash_report = Session(session.0 + 1);
let current_session = serai.session(network.into()).await.map_err(|e| format!("{e:?}"))?;
let current_session =
serai.current_session(network.into()).await.map_err(|e| format!("{e:?}"))?;
let current_session = current_session.map(|session| session.0);
// Only attempt to publish the slash report for session #n while session #n+1 is still
// active
@@ -55,14 +58,13 @@ impl<D: Db> PublishSlashReportTask<D> {
}
// If the session which should publish a slash report has already done so, move on
let key_pending_slash_report =
serai.key_pending_slash_report(network).await.map_err(|e| format!("{e:?}"))?;
if key_pending_slash_report.is_none() {
if !serai.pending_slash_report(network).await.map_err(|e| format!("{e:?}"))? {
txn.commit();
return Ok(false);
};
match self.serai.publish(&slash_report).await {
// Since this slash report is still pending, publish it
match self.serai.publish_transaction(&slash_report).await {
Ok(()) => {
txn.commit();
Ok(true)
@@ -84,7 +86,7 @@ impl<D: Db> ContinuallyRan for PublishSlashReportTask<D> {
async move {
let mut made_progress = false;
let mut error = None;
for network in serai_client::primitives::EXTERNAL_NETWORKS {
for network in ExternalNetworkId::all() {
let network_res = self.publish(network).await;
// We made progress if any network successfully published their slash report
made_progress |= network_res == Ok(true);

View File

@@ -3,7 +3,10 @@ use std::sync::Arc;
use serai_db::{DbTxn, Db};
use serai_client::{validator_sets::primitives::ExternalValidatorSet, Serai};
use serai_client_serai::{
abi::primitives::{network_id::ExternalNetworkId, validator_sets::ExternalValidatorSet},
Serai,
};
use serai_task::ContinuallyRan;
@@ -28,7 +31,7 @@ impl<D: Db> ContinuallyRan for SetKeysTask<D> {
fn run_iteration(&mut self) -> impl Send + Future<Output = Result<bool, Self::Error>> {
async move {
let mut made_progress = false;
for network in serai_client::primitives::EXTERNAL_NETWORKS {
for network in ExternalNetworkId::all() {
let mut txn = self.db.txn();
let Some((session, keys)) = Keys::take(&mut txn, network) else {
// No keys to set
@@ -37,10 +40,9 @@ impl<D: Db> ContinuallyRan for SetKeysTask<D> {
// This uses the latest finalized block, not the latest cosigned block, which should be
// fine as in the worst case, the only impact is no longer attempting TX publication
let serai =
self.serai.as_of_latest_finalized_block().await.map_err(|e| format!("{e:?}"))?;
let serai = serai.validator_sets();
let current_session = serai.session(network.into()).await.map_err(|e| format!("{e:?}"))?;
let serai = self.serai.state().await.map_err(|e| format!("{e:?}"))?;
let current_session =
serai.current_session(network.into()).await.map_err(|e| format!("{e:?}"))?;
let current_session = current_session.map(|session| session.0);
// Only attempt to set these keys if this isn't a retired session
if Some(session.0) < current_session {
@@ -67,7 +69,7 @@ impl<D: Db> ContinuallyRan for SetKeysTask<D> {
continue;
};
match self.serai.publish(&keys).await {
match self.serai.publish_transaction(&keys).await {
Ok(()) => {
txn.commit();
made_progress = true;

View File

@@ -36,7 +36,7 @@ log = { version = "0.4", default-features = false, features = ["std"] }
serai-db = { path = "../../common/db", version = "0.1" }
scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["std", "derive"] }
borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] }
futures-util = { version = "0.3", default-features = false, features = ["std", "sink", "channel"] }
futures-channel = { version = "0.3", default-features = false, features = ["std", "sink"] }
tendermint = { package = "tendermint-machine", path = "./tendermint", version = "0.2" }

View File

@@ -5,7 +5,7 @@ use ciphersuite::{group::GroupEncoding, *};
use serai_db::{Get, DbTxn, Db};
use scale::Decode;
use borsh::BorshDeserialize;
use tendermint::ext::{Network, Commit};
@@ -62,7 +62,7 @@ impl<D: Db, T: TransactionTrait> Blockchain<D, T> {
D::key(
b"tributary_blockchain",
b"next_nonce",
[genesis.as_ref(), signer.to_bytes().as_ref(), order].concat(),
[genesis.as_slice(), signer.to_bytes().as_slice(), order].concat(),
)
}
@@ -109,7 +109,7 @@ impl<D: Db, T: TransactionTrait> Blockchain<D, T> {
pub(crate) fn block_from_db(db: &D, genesis: [u8; 32], block: &[u8; 32]) -> Option<Block<T>> {
db.get(Self::block_key(&genesis, block))
.map(|bytes| Block::<T>::read::<&[u8]>(&mut bytes.as_ref()).unwrap())
.map(|bytes| Block::<T>::read::<&[u8]>(&mut bytes.as_slice()).unwrap())
}
pub(crate) fn commit_from_db(db: &D, genesis: [u8; 32], block: &[u8; 32]) -> Option<Vec<u8>> {
@@ -169,7 +169,7 @@ impl<D: Db, T: TransactionTrait> Blockchain<D, T> {
// we must have a commit per valid hash
let commit = Self::commit_from_db(db, genesis, &hash).unwrap();
// commit has to be valid if it is coming from our db
Some(Commit::<N::SignatureScheme>::decode(&mut commit.as_ref()).unwrap())
Some(Commit::<N::SignatureScheme>::deserialize_reader(&mut commit.as_slice()).unwrap())
};
let unsigned_in_chain =
|hash: [u8; 32]| db.get(Self::unsigned_included_key(&self.genesis, &hash)).is_some();
@@ -244,7 +244,7 @@ impl<D: Db, T: TransactionTrait> Blockchain<D, T> {
let commit = |block: u64| -> Option<Commit<N::SignatureScheme>> {
let commit = self.commit_by_block_number(block)?;
// commit has to be valid if it is coming from our db
Some(Commit::<N::SignatureScheme>::decode(&mut commit.as_ref()).unwrap())
Some(Commit::<N::SignatureScheme>::deserialize_reader(&mut commit.as_slice()).unwrap())
};
let mut txn_db = db.clone();

View File

@@ -3,10 +3,11 @@ use std::{sync::Arc, io};
use zeroize::Zeroizing;
use borsh::BorshDeserialize;
use ciphersuite::*;
use dalek_ff_group::Ristretto;
use scale::Decode;
use futures_channel::mpsc::UnboundedReceiver;
use futures_util::{StreamExt, SinkExt};
use ::tendermint::{
@@ -177,7 +178,7 @@ impl<D: Db, T: TransactionTrait, P: P2p> Tributary<D, T, P> {
let block_number = BlockNumber(blockchain.block_number());
let start_time = if let Some(commit) = blockchain.commit(&blockchain.tip()) {
Commit::<Validators>::decode(&mut commit.as_ref()).unwrap().end_time
Commit::<Validators>::deserialize_reader(&mut commit.as_slice()).unwrap().end_time
} else {
start_time
};
@@ -276,8 +277,8 @@ impl<D: Db, T: TransactionTrait, P: P2p> Tributary<D, T, P> {
}
let block = TendermintBlock(block.serialize());
let mut commit_ref = commit.as_ref();
let Ok(commit) = Commit::<Arc<Validators>>::decode(&mut commit_ref) else {
let mut commit_ref = commit.as_slice();
let Ok(commit) = Commit::<Arc<Validators>>::deserialize_reader(&mut commit_ref) else {
log::error!("sent an invalidly serialized commit");
return false;
};
@@ -327,7 +328,7 @@ impl<D: Db, T: TransactionTrait, P: P2p> Tributary<D, T, P> {
Some(&TENDERMINT_MESSAGE) => {
let Ok(msg) =
SignedMessageFor::<TendermintNetwork<D, T, P>>::decode::<&[u8]>(&mut &msg[1 ..])
SignedMessageFor::<TendermintNetwork<D, T, P>>::deserialize_reader(&mut &msg[1 ..])
else {
log::error!("received invalid tendermint message");
return false;
@@ -367,15 +368,17 @@ impl<D: Db, T: TransactionTrait> TributaryReader<D, T> {
Blockchain::<D, T>::commit_from_db(&self.0, self.1, hash)
}
pub fn parsed_commit(&self, hash: &[u8; 32]) -> Option<Commit<Validators>> {
self.commit(hash).map(|commit| Commit::<Validators>::decode(&mut commit.as_ref()).unwrap())
self
.commit(hash)
.map(|commit| Commit::<Validators>::deserialize_reader(&mut commit.as_slice()).unwrap())
}
pub fn block_after(&self, hash: &[u8; 32]) -> Option<[u8; 32]> {
Blockchain::<D, T>::block_after(&self.0, self.1, hash)
}
pub fn time_of_block(&self, hash: &[u8; 32]) -> Option<u64> {
self
.commit(hash)
.map(|commit| Commit::<Validators>::decode(&mut commit.as_ref()).unwrap().end_time)
self.commit(hash).map(|commit| {
Commit::<Validators>::deserialize_reader(&mut commit.as_slice()).unwrap().end_time
})
}
pub fn locally_provided_txs_in_block(&self, hash: &[u8; 32], order: &str) -> bool {

View File

@@ -21,7 +21,7 @@ use schnorr::{
use serai_db::Db;
use scale::{Encode, Decode};
use borsh::{BorshSerialize, BorshDeserialize};
use tendermint::{
SignedMessageFor,
ext::{
@@ -248,7 +248,7 @@ impl Weights for Validators {
}
}
#[derive(Clone, PartialEq, Eq, Debug, Encode, Decode)]
#[derive(Clone, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)]
pub struct TendermintBlock(pub Vec<u8>);
impl BlockTrait for TendermintBlock {
type Id = [u8; 32];
@@ -300,7 +300,7 @@ impl<D: Db, T: TransactionTrait, P: P2p> Network for TendermintNetwork<D, T, P>
fn broadcast(&mut self, msg: SignedMessageFor<Self>) -> impl Send + Future<Output = ()> {
async move {
let mut to_broadcast = vec![TENDERMINT_MESSAGE];
to_broadcast.extend(msg.encode());
msg.serialize(&mut to_broadcast).unwrap();
self.p2p.broadcast(self.genesis, to_broadcast).await
}
}
@@ -390,7 +390,7 @@ impl<D: Db, T: TransactionTrait, P: P2p> Network for TendermintNetwork<D, T, P>
return invalid_block();
};
let encoded_commit = commit.encode();
let encoded_commit = borsh::to_vec(&commit).unwrap();
loop {
let block_res = self.blockchain.write().await.add_block::<Self>(
&block,

View File

@@ -1,6 +1,6 @@
use std::io;
use scale::{Encode, Decode, IoReader};
use borsh::BorshDeserialize;
use blake2::{Digest, Blake2s256};
@@ -27,14 +27,14 @@ pub enum TendermintTx {
impl ReadWrite for TendermintTx {
fn read<R: io::Read>(reader: &mut R) -> io::Result<Self> {
Evidence::decode(&mut IoReader(reader))
Evidence::deserialize_reader(reader)
.map(TendermintTx::SlashEvidence)
.map_err(|_| io::Error::new(io::ErrorKind::InvalidData, "invalid evidence format"))
}
fn write<W: io::Write>(&self, writer: &mut W) -> io::Result<()> {
match self {
TendermintTx::SlashEvidence(ev) => writer.write_all(&ev.encode()),
TendermintTx::SlashEvidence(ev) => writer.write_all(&borsh::to_vec(&ev).unwrap()),
}
}
}
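
For context on the hunks above: this migration consistently swaps SCALE's `Encode::encode` for `borsh::to_vec` and `Decode::decode` for `BorshDeserialize::deserialize_reader`. A minimal round-trip sketch of that API, not part of the diff and assuming borsh 1.x with the `derive` feature:

use borsh::{BorshSerialize, BorshDeserialize};

// Hypothetical helper: serialize a value with borsh, then read it back.
fn round_trip<T: BorshSerialize + BorshDeserialize + PartialEq + core::fmt::Debug>(value: &T) {
  // `borsh::to_vec` replaces `Encode::encode`
  let bytes = borsh::to_vec(value).unwrap();
  // `deserialize_reader` replaces `Decode::decode`, reading from any `io::Read`
  let decoded = T::deserialize_reader(&mut bytes.as_slice()).unwrap();
  assert_eq!(value, &decoded);
}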

View File

@@ -10,8 +10,6 @@ use dalek_ff_group::Ristretto;
use ciphersuite::*;
use schnorr::SchnorrSignature;
use scale::Encode;
use ::tendermint::{
ext::{Network, Signer as SignerTrait, SignatureScheme, BlockNumber, RoundNumber},
SignedMessageFor, DataFor, Message, SignedMessage, Data, Evidence,
@@ -200,7 +198,7 @@ pub async fn signed_from_data<N: Network>(
round: RoundNumber(round_number),
data,
};
let sig = signer.sign(&msg.encode()).await;
let sig = signer.sign(&borsh::to_vec(&msg).unwrap()).await;
SignedMessage { msg, sig }
}
@@ -213,5 +211,5 @@ pub async fn random_evidence_tx<N: Network>(
let data = Data::Proposal(Some(RoundNumber(0)), b);
let signer_id = signer.validator_id().await.unwrap();
let signed = signed_from_data::<N>(signer, signer_id, 0, 0, data).await;
TendermintTx::SlashEvidence(Evidence::InvalidValidRound(signed.encode()))
TendermintTx::SlashEvidence(Evidence::InvalidValidRound(borsh::to_vec(&signed).unwrap()))
}

View File

@@ -6,8 +6,6 @@ use rand::{RngCore, rngs::OsRng};
use dalek_ff_group::Ristretto;
use ciphersuite::*;
use scale::Encode;
use tendermint::{
time::CanonicalInstant,
round::RoundData,
@@ -52,7 +50,10 @@ async fn invalid_valid_round() {
async move {
let data = Data::Proposal(valid_round, TendermintBlock(vec![]));
let signed = signed_from_data::<N>(signer.clone().into(), signer_id, 0, 0, data).await;
(signed.clone(), TendermintTx::SlashEvidence(Evidence::InvalidValidRound(signed.encode())))
(
signed.clone(),
TendermintTx::SlashEvidence(Evidence::InvalidValidRound(borsh::to_vec(&signed).unwrap())),
)
}
};
@@ -70,7 +71,8 @@ async fn invalid_valid_round() {
let mut random_sig = [0u8; 64];
OsRng.fill_bytes(&mut random_sig);
signed.sig = random_sig;
let tx = TendermintTx::SlashEvidence(Evidence::InvalidValidRound(signed.encode()));
let tx =
TendermintTx::SlashEvidence(Evidence::InvalidValidRound(borsh::to_vec(&signed).unwrap()));
// should fail
assert!(verify_tendermint_tx::<N>(&tx, &validators, commit).is_err());
@@ -90,7 +92,10 @@ async fn invalid_precommit_signature() {
let signed =
signed_from_data::<N>(signer.clone().into(), signer_id, 1, 0, Data::Precommit(precommit))
.await;
(signed.clone(), TendermintTx::SlashEvidence(Evidence::InvalidPrecommit(signed.encode())))
(
signed.clone(),
TendermintTx::SlashEvidence(Evidence::InvalidPrecommit(borsh::to_vec(&signed).unwrap())),
)
}
};
@@ -120,7 +125,8 @@ async fn invalid_precommit_signature() {
let mut random_sig = [0u8; 64];
OsRng.fill_bytes(&mut random_sig);
signed.sig = random_sig;
let tx = TendermintTx::SlashEvidence(Evidence::InvalidPrecommit(signed.encode()));
let tx =
TendermintTx::SlashEvidence(Evidence::InvalidPrecommit(borsh::to_vec(&signed).unwrap()));
assert!(verify_tendermint_tx::<N>(&tx, &validators, commit).is_err());
}
}
@@ -138,24 +144,32 @@ async fn evidence_with_prevote() {
// it should fail for all reasons.
let mut txs = vec![];
txs.push(TendermintTx::SlashEvidence(Evidence::InvalidPrecommit(
signed_from_data::<N>(signer.clone().into(), signer_id, 0, 0, Data::Prevote(block_id))
.await
.encode(),
borsh::to_vec(
&signed_from_data::<N>(signer.clone().into(), signer_id, 0, 0, Data::Prevote(block_id))
.await,
)
.unwrap(),
)));
txs.push(TendermintTx::SlashEvidence(Evidence::InvalidValidRound(
signed_from_data::<N>(signer.clone().into(), signer_id, 0, 0, Data::Prevote(block_id))
.await
.encode(),
borsh::to_vec(
&signed_from_data::<N>(signer.clone().into(), signer_id, 0, 0, Data::Prevote(block_id))
.await,
)
.unwrap(),
)));
// Since these require a second message, provide the same message again
// ConflictingMessages can, however, be fired for actually conflicting Prevotes
txs.push(TendermintTx::SlashEvidence(Evidence::ConflictingMessages(
signed_from_data::<N>(signer.clone().into(), signer_id, 0, 0, Data::Prevote(block_id))
.await
.encode(),
signed_from_data::<N>(signer.clone().into(), signer_id, 0, 0, Data::Prevote(block_id))
.await
.encode(),
borsh::to_vec(
&signed_from_data::<N>(signer.clone().into(), signer_id, 0, 0, Data::Prevote(block_id))
.await,
)
.unwrap(),
borsh::to_vec(
&signed_from_data::<N>(signer.clone().into(), signer_id, 0, 0, Data::Prevote(block_id))
.await,
)
.unwrap(),
)));
txs
}
@@ -189,16 +203,16 @@ async fn conflicting_msgs_evidence_tx() {
// non-conflicting data should fail
let signed_1 = signed_for_b_r(0, 0, Data::Proposal(None, TendermintBlock(vec![0x11]))).await;
let tx = TendermintTx::SlashEvidence(Evidence::ConflictingMessages(
signed_1.encode(),
signed_1.encode(),
borsh::to_vec(&signed_1).unwrap(),
borsh::to_vec(&signed_1).unwrap(),
));
assert!(verify_tendermint_tx::<N>(&tx, &validators, commit).is_err());
// conflicting data should pass
let signed_2 = signed_for_b_r(0, 0, Data::Proposal(None, TendermintBlock(vec![0x22]))).await;
let tx = TendermintTx::SlashEvidence(Evidence::ConflictingMessages(
signed_1.encode(),
signed_2.encode(),
borsh::to_vec(&signed_1).unwrap(),
borsh::to_vec(&signed_2).unwrap(),
));
verify_tendermint_tx::<N>(&tx, &validators, commit).unwrap();
@@ -206,16 +220,16 @@ async fn conflicting_msgs_evidence_tx() {
// (except for Precommit)
let signed_2 = signed_for_b_r(0, 1, Data::Proposal(None, TendermintBlock(vec![0x22]))).await;
let tx = TendermintTx::SlashEvidence(Evidence::ConflictingMessages(
signed_1.encode(),
signed_2.encode(),
borsh::to_vec(&signed_1).unwrap(),
borsh::to_vec(&signed_2).unwrap(),
));
verify_tendermint_tx::<N>(&tx, &validators, commit).unwrap_err();
// Proposals for different block numbers should also fail as evidence
let signed_2 = signed_for_b_r(1, 0, Data::Proposal(None, TendermintBlock(vec![0x22]))).await;
let tx = TendermintTx::SlashEvidence(Evidence::ConflictingMessages(
signed_1.encode(),
signed_2.encode(),
borsh::to_vec(&signed_1).unwrap(),
borsh::to_vec(&signed_2).unwrap(),
));
verify_tendermint_tx::<N>(&tx, &validators, commit).unwrap_err();
}
@@ -225,16 +239,16 @@ async fn conflicting_msgs_evidence_tx() {
// non-conflicting data should fail
let signed_1 = signed_for_b_r(0, 0, Data::Prevote(Some([0x11; 32]))).await;
let tx = TendermintTx::SlashEvidence(Evidence::ConflictingMessages(
signed_1.encode(),
signed_1.encode(),
borsh::to_vec(&signed_1).unwrap(),
borsh::to_vec(&signed_1).unwrap(),
));
assert!(verify_tendermint_tx::<N>(&tx, &validators, commit).is_err());
// conflicting data should pass
let signed_2 = signed_for_b_r(0, 0, Data::Prevote(Some([0x22; 32]))).await;
let tx = TendermintTx::SlashEvidence(Evidence::ConflictingMessages(
signed_1.encode(),
signed_2.encode(),
borsh::to_vec(&signed_1).unwrap(),
borsh::to_vec(&signed_2).unwrap(),
));
verify_tendermint_tx::<N>(&tx, &validators, commit).unwrap();
@@ -242,16 +256,16 @@ async fn conflicting_msgs_evidence_tx() {
// (except for Precommit)
let signed_2 = signed_for_b_r(0, 1, Data::Prevote(Some([0x22; 32]))).await;
let tx = TendermintTx::SlashEvidence(Evidence::ConflictingMessages(
signed_1.encode(),
signed_2.encode(),
borsh::to_vec(&signed_1).unwrap(),
borsh::to_vec(&signed_2).unwrap(),
));
verify_tendermint_tx::<N>(&tx, &validators, commit).unwrap_err();
// Proposals for different block numbers should also fail as evidence
let signed_2 = signed_for_b_r(1, 0, Data::Prevote(Some([0x22; 32]))).await;
let tx = TendermintTx::SlashEvidence(Evidence::ConflictingMessages(
signed_1.encode(),
signed_2.encode(),
borsh::to_vec(&signed_1).unwrap(),
borsh::to_vec(&signed_2).unwrap(),
));
verify_tendermint_tx::<N>(&tx, &validators, commit).unwrap_err();
}
@@ -273,8 +287,8 @@ async fn conflicting_msgs_evidence_tx() {
.await;
let tx = TendermintTx::SlashEvidence(Evidence::ConflictingMessages(
signed_1.encode(),
signed_2.encode(),
borsh::to_vec(&signed_1).unwrap(),
borsh::to_vec(&signed_2).unwrap(),
));
// update schema so that we don't fail due to invalid signature
@@ -292,8 +306,8 @@ async fn conflicting_msgs_evidence_tx() {
let signed_1 = signed_for_b_r(0, 0, Data::Proposal(None, TendermintBlock(vec![]))).await;
let signed_2 = signed_for_b_r(0, 0, Data::Prevote(None)).await;
let tx = TendermintTx::SlashEvidence(Evidence::ConflictingMessages(
signed_1.encode(),
signed_2.encode(),
borsh::to_vec(&signed_1).unwrap(),
borsh::to_vec(&signed_2).unwrap(),
));
assert!(verify_tendermint_tx::<N>(&tx, &validators, commit).is_err());
}

View File

@@ -6,7 +6,7 @@ license = "MIT"
repository = "https://github.com/serai-dex/serai/tree/develop/coordinator/tendermint"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
edition = "2021"
rust-version = "1.75"
rust-version = "1.77"
[package.metadata.docs.rs]
all-features = true
@@ -21,7 +21,7 @@ thiserror = { version = "2", default-features = false, features = ["std"] }
hex = { version = "0.4", default-features = false, features = ["std"] }
log = { version = "0.4", default-features = false, features = ["std"] }
parity-scale-codec = { version = "3", default-features = false, features = ["std", "derive"] }
borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] }
futures-util = { version = "0.3", default-features = false, features = ["std", "async-await-macro", "sink", "channel"] }
futures-channel = { version = "0.3", default-features = false, features = ["std", "sink"] }

View File

@@ -3,33 +3,41 @@ use std::{sync::Arc, collections::HashSet};
use thiserror::Error;
use parity_scale_codec::{Encode, Decode};
use borsh::{BorshSerialize, BorshDeserialize};
use crate::{SignedMessageFor, SlashEvent, commit_msg};
/// An alias for a series of traits required for a type to be usable as a validator ID,
/// automatically implemented for all types satisfying those traits.
pub trait ValidatorId:
Send + Sync + Clone + Copy + PartialEq + Eq + Hash + Debug + Encode + Decode
Send + Sync + Clone + Copy + PartialEq + Eq + Hash + Debug + BorshSerialize + BorshDeserialize
{
}
impl<V: Send + Sync + Clone + Copy + PartialEq + Eq + Hash + Debug + Encode + Decode> ValidatorId
for V
#[rustfmt::skip]
impl<
V: Send + Sync + Clone + Copy + PartialEq + Eq + Hash + Debug + BorshSerialize + BorshDeserialize,
> ValidatorId for V
{
}
/// An alias for a series of traits required for a type to be usable as a signature,
/// automatically implemented for all types satisfying those traits.
pub trait Signature: Send + Sync + Clone + PartialEq + Eq + Debug + Encode + Decode {}
impl<S: Send + Sync + Clone + PartialEq + Eq + Debug + Encode + Decode> Signature for S {}
pub trait Signature:
Send + Sync + Clone + PartialEq + Eq + Debug + BorshSerialize + BorshDeserialize
{
}
impl<S: Send + Sync + Clone + PartialEq + Eq + Debug + BorshSerialize + BorshDeserialize> Signature
for S
{
}
// Type aliases which are distinct according to the type system
/// A struct containing a Block Number, wrapped to have a distinct type.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug, Encode, Decode)]
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug, BorshSerialize, BorshDeserialize)]
pub struct BlockNumber(pub u64);
/// A struct containing a round number, wrapped to have a distinct type.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug, Encode, Decode)]
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug, BorshSerialize, BorshDeserialize)]
pub struct RoundNumber(pub u32);
/// A signer for a validator.
@@ -127,7 +135,7 @@ impl<S: SignatureScheme> SignatureScheme for Arc<S> {
/// A commit for a specific block.
///
/// The list of validators has weight exceeding the threshold for a valid commit.
#[derive(PartialEq, Debug, Encode, Decode)]
#[derive(PartialEq, Debug, BorshSerialize, BorshDeserialize)]
pub struct Commit<S: SignatureScheme> {
/// End time of the round which created this commit, used as the start time of the next block.
pub end_time: u64,
@@ -185,7 +193,7 @@ impl<W: Weights> Weights for Arc<W> {
}
/// Simplified error enum representing a block's validity.
#[derive(Clone, Copy, PartialEq, Eq, Debug, Error, Encode, Decode)]
#[derive(Clone, Copy, PartialEq, Eq, Debug, Error, BorshSerialize, BorshDeserialize)]
pub enum BlockError {
/// Malformed block which is wholly invalid.
#[error("invalid block")]
@@ -197,9 +205,20 @@ pub enum BlockError {
}
/// Trait representing a Block.
pub trait Block: Send + Sync + Clone + PartialEq + Eq + Debug + Encode + Decode {
pub trait Block:
Send + Sync + Clone + PartialEq + Eq + Debug + BorshSerialize + BorshDeserialize
{
// Type used to identify blocks. Presumably a cryptographic hash of the block.
type Id: Send + Sync + Copy + Clone + PartialEq + Eq + AsRef<[u8]> + Debug + Encode + Decode;
type Id: Send
+ Sync
+ Copy
+ Clone
+ PartialEq
+ Eq
+ AsRef<[u8]>
+ Debug
+ BorshSerialize
+ BorshDeserialize;
/// Return the deterministic, unique ID for this block.
fn id(&self) -> Self::Id;
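
With the bounds rewritten above, a type qualifies for the `ValidatorId`, `Signature`, and `Block` aliases purely through borsh derives, with no SCALE codec involvement. A minimal sketch, using a hypothetical type rather than one from the tree:

use borsh::{BorshSerialize, BorshDeserialize};

// Satisfies the `Signature` alias via the blanket impl above
#[derive(Clone, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)]
pub struct ExampleSignature(pub [u8; 64]);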

View File

@@ -1,5 +1,3 @@
#![expect(clippy::cast_possible_truncation)]
use core::fmt::Debug;
use std::{
@@ -8,7 +6,7 @@ use std::{
collections::{VecDeque, HashMap},
};
use parity_scale_codec::{Encode, Decode, IoReader};
use borsh::{BorshSerialize, BorshDeserialize};
use futures_channel::mpsc;
use futures_util::{
@@ -43,14 +41,14 @@ pub fn commit_msg(end_time: u64, id: &[u8]) -> Vec<u8> {
[&end_time.to_le_bytes(), id].concat()
}
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug, Encode, Decode)]
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug, BorshSerialize, BorshDeserialize)]
pub enum Step {
Propose,
Prevote,
Precommit,
}
#[derive(Clone, Eq, Debug, Encode, Decode)]
#[derive(Clone, Eq, Debug, BorshSerialize, BorshDeserialize)]
pub enum Data<B: Block, S: Signature> {
Proposal(Option<RoundNumber>, B),
Prevote(Option<B::Id>),
@@ -92,7 +90,7 @@ impl<B: Block, S: Signature> Data<B, S> {
}
}
#[derive(Clone, PartialEq, Eq, Debug, Encode, Decode)]
#[derive(Clone, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)]
pub struct Message<V: ValidatorId, B: Block, S: Signature> {
pub sender: V,
pub block: BlockNumber,
@@ -102,7 +100,7 @@ pub struct Message<V: ValidatorId, B: Block, S: Signature> {
}
/// A signed Tendermint consensus message to be broadcast to the other validators.
#[derive(Clone, PartialEq, Eq, Debug, Encode, Decode)]
#[derive(Clone, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)]
pub struct SignedMessage<V: ValidatorId, B: Block, S: Signature> {
pub msg: Message<V, B, S>,
pub sig: S,
@@ -119,18 +117,18 @@ impl<V: ValidatorId, B: Block, S: Signature> SignedMessage<V, B, S> {
&self,
signer: &Scheme,
) -> bool {
signer.verify(self.msg.sender, &self.msg.encode(), &self.sig)
signer.verify(self.msg.sender, &borsh::to_vec(&self.msg).unwrap(), &self.sig)
}
}
#[derive(Clone, Copy, PartialEq, Eq, Debug, Encode, Decode)]
#[derive(Clone, Copy, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)]
pub enum SlashReason {
FailToPropose,
InvalidBlock,
InvalidProposer,
}
#[derive(Clone, PartialEq, Eq, Debug, Encode, Decode)]
#[derive(Clone, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)]
pub enum Evidence {
ConflictingMessages(Vec<u8>, Vec<u8>),
InvalidPrecommit(Vec<u8>),
@@ -161,7 +159,7 @@ pub type SignedMessageFor<N> = SignedMessage<
>;
pub fn decode_signed_message<N: Network>(mut data: &[u8]) -> Option<SignedMessageFor<N>> {
SignedMessageFor::<N>::decode(&mut data).ok()
SignedMessageFor::<N>::deserialize_reader(&mut data).ok()
}
fn decode_and_verify_signed_message<N: Network>(
@@ -341,7 +339,7 @@ impl<N: Network + 'static> TendermintMachine<N> {
target: "tendermint",
"proposer for block {}, round {round:?} was {} (me: {res})",
self.block.number.0,
hex::encode(proposer.encode()),
hex::encode(borsh::to_vec(&proposer).unwrap()),
);
res
}
@@ -422,7 +420,11 @@ impl<N: Network + 'static> TendermintMachine<N> {
// TODO: If the new slash event has evidence, emit to prevent a low-importance slash from
// cancelling emission of high-importance slashes
if !self.block.slashes.contains(&validator) {
log::info!(target: "tendermint", "Slashing validator {}", hex::encode(validator.encode()));
log::info!(
target: "tendermint",
"Slashing validator {}",
hex::encode(borsh::to_vec(&validator).unwrap()),
);
self.block.slashes.insert(validator);
self.network.slash(validator, slash_event).await;
}
@@ -672,7 +674,7 @@ impl<N: Network + 'static> TendermintMachine<N> {
self
.slash(
msg.sender,
SlashEvent::WithEvidence(Evidence::InvalidPrecommit(signed.encode())),
SlashEvent::WithEvidence(Evidence::InvalidPrecommit(borsh::to_vec(&signed).unwrap())),
)
.await;
Err(TendermintError::Malicious)?;
@@ -743,7 +745,10 @@ impl<N: Network + 'static> TendermintMachine<N> {
self.broadcast(Data::Prevote(None));
}
self
.slash(msg.sender, SlashEvent::WithEvidence(Evidence::InvalidValidRound(msg.encode())))
.slash(
msg.sender,
SlashEvent::WithEvidence(Evidence::InvalidValidRound(borsh::to_vec(&msg).unwrap())),
)
.await;
Err(TendermintError::Malicious)?;
}
@@ -1034,7 +1039,7 @@ impl<N: Network + 'static> TendermintMachine<N> {
while !messages.is_empty() {
self.network.broadcast(
SignedMessageFor::<N>::decode(&mut IoReader(&mut messages))
SignedMessageFor::<N>::deserialize_reader(&mut messages)
.expect("saved invalid message to DB")
).await;
}
@@ -1059,7 +1064,7 @@ impl<N: Network + 'static> TendermintMachine<N> {
} {
if our_message {
assert!(sig.is_none());
sig = Some(self.signer.sign(&msg.encode()).await);
sig = Some(self.signer.sign(&borsh::to_vec(&msg).unwrap()).await);
}
let sig = sig.unwrap();
@@ -1079,7 +1084,7 @@ impl<N: Network + 'static> TendermintMachine<N> {
let message_tape_key = message_tape_key(self.genesis);
let mut txn = self.db.txn();
let mut message_tape = txn.get(&message_tape_key).unwrap_or(vec![]);
message_tape.extend(signed_msg.encode());
signed_msg.serialize(&mut message_tape).unwrap();
txn.put(&message_tape_key, message_tape);
txn.commit();
}
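
The message-tape hunk above leans on `Vec<u8>` implementing `io::Write`: `BorshSerialize::serialize` writes directly into the existing buffer, where `.extend(msg.encode())` previously allocated an intermediate `Vec`. A sketch of the pattern, with a hypothetical helper name:

use borsh::BorshSerialize;

// Append a borsh-serialized message to an existing tape, in place.
fn append_to_tape<T: BorshSerialize>(tape: &mut Vec<u8>, msg: &T) {
  // `Vec<u8>` implements `io::Write`, so no intermediate buffer is allocated
  msg.serialize(tape).unwrap();
}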

View File

@@ -1,7 +1,5 @@
use std::{sync::Arc, collections::HashMap};
use parity_scale_codec::Encode;
use crate::{ext::*, RoundNumber, Step, DataFor, SignedMessageFor, Evidence};
type RoundLog<N> = HashMap<<N as Network>::ValidatorId, HashMap<Step, SignedMessageFor<N>>>;
@@ -39,7 +37,10 @@ impl<N: Network> MessageLog<N> {
target: "tendermint",
"Validator sent multiple messages for the same block + round + step"
);
Err(Evidence::ConflictingMessages(existing.encode(), signed.encode()))?;
Err(Evidence::ConflictingMessages(
borsh::to_vec(&existing).unwrap(),
borsh::to_vec(&signed).unwrap(),
))?;
}
return Ok(false);
}

View File

@@ -4,7 +4,7 @@ use std::{
time::{UNIX_EPOCH, SystemTime, Duration},
};
use parity_scale_codec::{Encode, Decode};
use borsh::{BorshSerialize, BorshDeserialize};
use futures_util::sink::SinkExt;
use tokio::{sync::RwLock, time::sleep};
@@ -89,7 +89,7 @@ impl Weights for TestWeights {
}
}
#[derive(Clone, PartialEq, Eq, Debug, Encode, Decode)]
#[derive(Clone, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)]
struct TestBlock {
id: TestBlockId,
valid: Result<(), BlockError>,

View File

@@ -21,7 +21,6 @@ workspace = true
zeroize = { version = "^1.5", default-features = false, features = ["std"] }
rand_core = { version = "0.6", default-features = false, features = ["std"] }
scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["std", "derive"] }
borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] }
blake2 = { version = "0.11.0-rc.0", default-features = false, features = ["alloc"] }
@@ -30,14 +29,14 @@ dalek-ff-group = { path = "../../crypto/dalek-ff-group", default-features = fals
dkg = { path = "../../crypto/dkg", default-features = false, features = ["std"] }
schnorr = { package = "schnorr-signatures", path = "../../crypto/schnorr", default-features = false, features = ["std"] }
serai-client = { path = "../../substrate/client", default-features = false, features = ["serai", "borsh"] }
serai-primitives = { path = "../../substrate/primitives", default-features = false, features = ["std"] }
serai-db = { path = "../../common/db" }
serai-task = { path = "../../common/task", version = "0.1" }
tributary-sdk = { path = "../tributary-sdk" }
serai-cosign = { path = "../cosign" }
serai-cosign-types = { path = "../cosign/types" }
serai-coordinator-substrate = { path = "../substrate" }
messages = { package = "serai-processor-messages", path = "../../processor/messages" }

View File

@@ -1,22 +1,19 @@
#![expect(clippy::cast_possible_truncation)]
use std::collections::HashMap;
use scale::Encode;
use borsh::{BorshSerialize, BorshDeserialize};
use serai_client::{primitives::SeraiAddress, validator_sets::primitives::ExternalValidatorSet};
use serai_primitives::{BlockHash, validator_sets::ExternalValidatorSet, address::SeraiAddress};
use messages::sign::{VariantSignId, SignId};
use serai_db::*;
use serai_cosign::CosignIntent;
use serai_cosign_types::CosignIntent;
use crate::transaction::SigningProtocolRound;
/// A topic within the database which the group participates in
#[derive(Clone, Copy, PartialEq, Eq, Debug, Encode, BorshSerialize, BorshDeserialize)]
#[derive(Clone, Copy, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)]
pub enum Topic {
/// Vote to remove a participant
RemoveParticipant {
@@ -125,7 +122,7 @@ impl Topic {
Topic::DkgConfirmation { attempt, round: _ } => Some({
let id = {
let mut id = [0; 32];
let encoded_set = set.encode();
let encoded_set = borsh::to_vec(&set).unwrap();
id[.. encoded_set.len()].copy_from_slice(&encoded_set);
VariantSignId::Batch(id)
};
@@ -235,18 +232,18 @@ create_db!(
SlashPoints: (set: ExternalValidatorSet, validator: SeraiAddress) -> u32,
// The cosign intent for a Substrate block
CosignIntents: (set: ExternalValidatorSet, substrate_block_hash: [u8; 32]) -> CosignIntent,
CosignIntents: (set: ExternalValidatorSet, substrate_block_hash: BlockHash) -> CosignIntent,
// The latest Substrate block to cosign.
LatestSubstrateBlockToCosign: (set: ExternalValidatorSet) -> [u8; 32],
LatestSubstrateBlockToCosign: (set: ExternalValidatorSet) -> BlockHash,
// The hash of the block we're actively cosigning.
ActivelyCosigning: (set: ExternalValidatorSet) -> [u8; 32],
ActivelyCosigning: (set: ExternalValidatorSet) -> BlockHash,
// If this block has already been cosigned.
Cosigned: (set: ExternalValidatorSet, substrate_block_hash: [u8; 32]) -> (),
Cosigned: (set: ExternalValidatorSet, substrate_block_hash: BlockHash) -> (),
// The plans to recognize upon a `Transaction::SubstrateBlock` being included on-chain.
SubstrateBlockPlans: (
set: ExternalValidatorSet,
substrate_block_hash: [u8; 32]
substrate_block_hash: BlockHash
) -> Vec<[u8; 32]>,
// The weight accumulated for a topic.
@@ -294,26 +291,26 @@ impl TributaryDb {
pub(crate) fn latest_substrate_block_to_cosign(
getter: &impl Get,
set: ExternalValidatorSet,
) -> Option<[u8; 32]> {
) -> Option<BlockHash> {
LatestSubstrateBlockToCosign::get(getter, set)
}
pub(crate) fn set_latest_substrate_block_to_cosign(
txn: &mut impl DbTxn,
set: ExternalValidatorSet,
substrate_block_hash: [u8; 32],
substrate_block_hash: BlockHash,
) {
LatestSubstrateBlockToCosign::set(txn, set, &substrate_block_hash);
}
pub(crate) fn actively_cosigning(
txn: &mut impl DbTxn,
set: ExternalValidatorSet,
) -> Option<[u8; 32]> {
) -> Option<BlockHash> {
ActivelyCosigning::get(txn, set)
}
pub(crate) fn start_cosigning(
txn: &mut impl DbTxn,
set: ExternalValidatorSet,
substrate_block_hash: [u8; 32],
substrate_block_hash: BlockHash,
substrate_block_number: u64,
) {
assert!(
@@ -338,14 +335,14 @@ impl TributaryDb {
pub(crate) fn mark_cosigned(
txn: &mut impl DbTxn,
set: ExternalValidatorSet,
substrate_block_hash: [u8; 32],
substrate_block_hash: BlockHash,
) {
Cosigned::set(txn, set, substrate_block_hash, &());
}
pub(crate) fn cosigned(
txn: &mut impl DbTxn,
set: ExternalValidatorSet,
substrate_block_hash: [u8; 32],
substrate_block_hash: BlockHash,
) -> bool {
Cosigned::get(txn, set, substrate_block_hash).is_some()
}

View File

@@ -8,9 +8,10 @@ use std::collections::HashMap;
use ciphersuite::group::GroupEncoding;
use dkg::Participant;
use serai_client::{
primitives::SeraiAddress,
validator_sets::primitives::{ExternalValidatorSet, Slash},
use serai_primitives::{
BlockHash,
validator_sets::{ExternalValidatorSet, Slash},
address::SeraiAddress,
};
use serai_db::*;
@@ -25,7 +26,7 @@ use tributary_sdk::{
Transaction as TributaryTransaction, Block, TributaryReader, P2p,
};
use serai_cosign::CosignIntent;
use serai_cosign_types::CosignIntent;
use serai_coordinator_substrate::NewSetInformation;
use messages::sign::{VariantSignId, SignId};
@@ -79,7 +80,7 @@ impl CosignIntents {
fn take(
txn: &mut impl DbTxn,
set: ExternalValidatorSet,
substrate_block_hash: [u8; 32],
substrate_block_hash: BlockHash,
) -> Option<CosignIntent> {
db::CosignIntents::take(txn, set, substrate_block_hash)
}
@@ -113,7 +114,7 @@ impl SubstrateBlockPlans {
pub fn set(
txn: &mut impl DbTxn,
set: ExternalValidatorSet,
substrate_block_hash: [u8; 32],
substrate_block_hash: BlockHash,
plans: &Vec<[u8; 32]>,
) {
db::SubstrateBlockPlans::set(txn, set, substrate_block_hash, plans);
@@ -121,7 +122,7 @@ impl SubstrateBlockPlans {
fn take(
txn: &mut impl DbTxn,
set: ExternalValidatorSet,
substrate_block_hash: [u8; 32],
substrate_block_hash: BlockHash,
) -> Option<Vec<[u8; 32]>> {
db::SubstrateBlockPlans::take(txn, set, substrate_block_hash)
}
@@ -574,14 +575,9 @@ impl<TD: Db, TDT: DbTxn, P: P2p> ScanBlock<'_, TD, TDT, P> {
};
let msgs = (
decode_signed_message::<TendermintNetwork<TD, Transaction, P>>(&data.0).unwrap(),
if data.1.is_some() {
Some(
decode_signed_message::<TendermintNetwork<TD, Transaction, P>>(&data.1.unwrap())
.unwrap(),
)
} else {
None
},
data.1.as_ref().map(|data| {
decode_signed_message::<TendermintNetwork<TD, Transaction, P>>(data).unwrap()
}),
);
// Since anything with evidence is fundamentally faulty behavior, not just temporal

View File

@@ -12,10 +12,9 @@ use ciphersuite::{
use dalek_ff_group::Ristretto;
use schnorr::SchnorrSignature;
use scale::Encode;
use borsh::{BorshSerialize, BorshDeserialize};
use serai_client::{primitives::SeraiAddress, validator_sets::primitives::MAX_KEY_SHARES_PER_SET};
use serai_primitives::{BlockHash, validator_sets::KeyShares, address::SeraiAddress};
use messages::sign::VariantSignId;
@@ -29,7 +28,7 @@ use tributary_sdk::{
use crate::db::Topic;
/// The round this data is for, within a signing protocol.
#[derive(Clone, Copy, PartialEq, Eq, Debug, Encode, BorshSerialize, BorshDeserialize)]
#[derive(Clone, Copy, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)]
pub enum SigningProtocolRound {
/// A preprocess.
Preprocess,
@@ -138,7 +137,7 @@ pub enum Transaction {
/// be the one selected to be cosigned.
Cosign {
/// The hash of the Substrate block to cosign
substrate_block_hash: [u8; 32],
substrate_block_hash: BlockHash,
},
/// Note an intended-to-be-cosigned Substrate block as cosigned
@@ -176,7 +175,7 @@ pub enum Transaction {
/// cosigning the block in question, it'd be safe to provide this and move on to the next cosign.
Cosigned {
/// The hash of the Substrate block which was cosigned
substrate_block_hash: [u8; 32],
substrate_block_hash: BlockHash,
},
/// Acknowledge a Substrate block
@@ -187,7 +186,7 @@ pub enum Transaction {
/// resulting from its handling.
SubstrateBlock {
/// The hash of the Substrate block
hash: [u8; 32],
hash: BlockHash,
},
/// Acknowledge a Batch
@@ -242,19 +241,20 @@ impl TransactionTrait for Transaction {
fn kind(&self) -> TransactionKind {
match self {
Transaction::RemoveParticipant { participant, signed } => TransactionKind::Signed(
(b"RemoveParticipant", participant).encode(),
borsh::to_vec(&(b"RemoveParticipant".as_slice(), participant)).unwrap(),
signed.to_tributary_signed(0),
),
Transaction::DkgParticipation { signed, .. } => {
TransactionKind::Signed(b"DkgParticipation".encode(), signed.to_tributary_signed(0))
}
Transaction::DkgParticipation { signed, .. } => TransactionKind::Signed(
borsh::to_vec(b"DkgParticipation".as_slice()).unwrap(),
signed.to_tributary_signed(0),
),
Transaction::DkgConfirmationPreprocess { attempt, signed, .. } => TransactionKind::Signed(
(b"DkgConfirmation", attempt).encode(),
borsh::to_vec(&(b"DkgConfirmation".as_slice(), attempt)).unwrap(),
signed.to_tributary_signed(0),
),
Transaction::DkgConfirmationShare { attempt, signed, .. } => TransactionKind::Signed(
(b"DkgConfirmation", attempt).encode(),
borsh::to_vec(&(b"DkgConfirmation".as_slice(), attempt)).unwrap(),
signed.to_tributary_signed(1),
),
@@ -264,13 +264,14 @@ impl TransactionTrait for Transaction {
Transaction::Batch { .. } => TransactionKind::Provided("Batch"),
Transaction::Sign { id, attempt, round, signed, .. } => TransactionKind::Signed(
(b"Sign", id, attempt).encode(),
borsh::to_vec(&(b"Sign".as_slice(), id, attempt)).unwrap(),
signed.to_tributary_signed(round.nonce()),
),
Transaction::SlashReport { signed, .. } => {
TransactionKind::Signed(b"SlashReport".encode(), signed.to_tributary_signed(0))
}
Transaction::SlashReport { signed, .. } => TransactionKind::Signed(
borsh::to_vec(b"SlashReport".as_slice()).unwrap(),
signed.to_tributary_signed(0),
),
}
}
@@ -302,14 +303,14 @@ impl TransactionTrait for Transaction {
Transaction::Batch { .. } => {}
Transaction::Sign { data, .. } => {
if data.len() > usize::from(MAX_KEY_SHARES_PER_SET) {
if data.len() > usize::from(KeyShares::MAX_PER_SET) {
Err(TransactionError::InvalidContent)?
}
// TODO: MAX_SIGN_LEN
}
Transaction::SlashReport { slash_points, .. } => {
if slash_points.len() > usize::from(MAX_KEY_SHARES_PER_SET) {
if slash_points.len() > usize::from(KeyShares::MAX_PER_SET) {
Err(TransactionError::InvalidContent)?
}
}
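
For reference, the signed-transaction ordering keys above are now borsh encodings of small tuples. borsh writes a byte slice as a little-endian u32 length prefix followed by the bytes, and writes integers in little endian, so these keys stay deterministic. A sketch, assuming a u32 `attempt` with value 5:

fn main() {
  let key = borsh::to_vec(&(b"DkgConfirmation".as_slice(), 5u32)).unwrap();
  // 4-byte little-endian length prefix for the 15-byte slice
  assert_eq!(&key[.. 4], 15u32.to_le_bytes().as_slice());
  assert_eq!(&key[4 .. 19], b"DkgConfirmation".as_slice());
  // the attempt, in little endian
  assert_eq!(&key[19 ..], 5u32.to_le_bytes().as_slice());
}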

View File

@@ -23,19 +23,12 @@ thiserror = { version = "2", default-features = false }
std-shims = { version = "0.1", path = "../../common/std-shims", default-features = false, features = ["alloc"] }
borsh = { version = "1", default-features = false, features = ["derive", "de_strict_order"], optional = true }
ciphersuite = { path = "../ciphersuite", version = "^0.4.1", default-features = false, features = ["alloc"] }
[features]
std = [
"thiserror/std",
"std-shims/std",
"borsh?/std",
"ciphersuite/std",
]
borsh = ["dep:borsh"]
default = ["std"]

View File

@@ -22,7 +22,6 @@ use ciphersuite::{
/// The ID of a participant, defined as a non-zero u16.
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Hash, Debug, Zeroize)]
#[cfg_attr(feature = "borsh", derive(borsh::BorshSerialize))]
pub struct Participant(u16);
impl Participant {
/// Create a new Participant identifier from a u16.
@@ -129,18 +128,8 @@ pub enum DkgError {
NotParticipating,
}
// Manually implements BorshDeserialize so we can enforce it's a valid index
#[cfg(feature = "borsh")]
impl borsh::BorshDeserialize for Participant {
fn deserialize_reader<R: io::Read>(reader: &mut R) -> io::Result<Self> {
Participant::new(u16::deserialize_reader(reader)?)
.ok_or_else(|| io::Error::other("invalid participant"))
}
}
/// Parameters for a multisig.
#[derive(Clone, Copy, PartialEq, Eq, Debug, Zeroize)]
#[cfg_attr(feature = "borsh", derive(borsh::BorshSerialize))]
pub struct ThresholdParams {
/// Participants needed to sign on behalf of the group.
t: u16,
@@ -210,16 +199,6 @@ impl ThresholdParams {
}
}
#[cfg(feature = "borsh")]
impl borsh::BorshDeserialize for ThresholdParams {
fn deserialize_reader<R: io::Read>(reader: &mut R) -> io::Result<Self> {
let t = u16::deserialize_reader(reader)?;
let n = u16::deserialize_reader(reader)?;
let i = Participant::deserialize_reader(reader)?;
ThresholdParams::new(t, n, i).map_err(|e| io::Error::other(format!("{e:?}")))
}
}
/// A method of interpolation.
#[derive(Clone, PartialEq, Eq, Debug, Zeroize)]
pub enum Interpolation<F: Zeroize + PrimeField> {

View File

@@ -10,7 +10,6 @@ ignore = [
"RUSTSEC-2022-0061", # https://github.com/serai-dex/serai/227
"RUSTSEC-2024-0370", # proc-macro-error is unmaintained
"RUSTSEC-2024-0436", # paste is unmaintained
"RUSTSEC-2025-0057", # fxhash is unmaintained, fixed with bytecodealliance/wasmtime/pull/11634
]
[licenses]
@@ -29,7 +28,6 @@ allow = [
"ISC",
"Zlib",
"Unicode-3.0",
# "OpenSSL", # Commented as it's not currently in-use within the Serai tree
"CDLA-Permissive-2.0",
# Non-invasive copyleft
@@ -73,6 +71,7 @@ exceptions = [
{ allow = ["AGPL-3.0-only"], name = "serai-monero-processor" },
{ allow = ["AGPL-3.0-only"], name = "tributary-sdk" },
{ allow = ["AGPL-3.0-only"], name = "serai-cosign-types" },
{ allow = ["AGPL-3.0-only"], name = "serai-cosign" },
{ allow = ["AGPL-3.0-only"], name = "serai-coordinator-substrate" },
{ allow = ["AGPL-3.0-only"], name = "serai-coordinator-tributary" },
@@ -81,7 +80,9 @@ exceptions = [
{ allow = ["AGPL-3.0-only"], name = "serai-coordinator" },
{ allow = ["AGPL-3.0-only"], name = "pallet-session" },
{ allow = ["AGPL-3.0-only"], name = "substrate-median" },
{ allow = ["AGPL-3.0-only"], name = "serai-core-pallet" },
{ allow = ["AGPL-3.0-only"], name = "serai-coins-pallet" },
{ allow = ["AGPL-3.0-only"], name = "serai-dex-pallet" },
@@ -107,6 +108,7 @@ exceptions = [
{ allow = ["AGPL-3.0-only"], name = "serai-message-queue-tests" },
{ allow = ["AGPL-3.0-only"], name = "serai-processor-tests" },
{ allow = ["AGPL-3.0-only"], name = "serai-coordinator-tests" },
{ allow = ["AGPL-3.0-only"], name = "serai-substrate-tests" },
{ allow = ["AGPL-3.0-only"], name = "serai-full-stack-tests" },
{ allow = ["AGPL-3.0-only"], name = "serai-reproducible-runtime-tests" },
]
@@ -124,12 +126,22 @@ multiple-versions = "warn"
wildcards = "warn"
highlight = "all"
deny = [
# Contains a non-reproducible binary blob
# https://github.com/serde-rs/serde/pull/2514
# https://github.com/serde-rs/serde/issues/2575
{ name = "serde_derive", version = ">=1.0.172, <1.0.185" },
# Introduced an insecure implementation of `borsh`, removed in `0.15.1`
# https://github.com/rust-lang/hashbrown/issues/576
{ name = "hashbrown", version = "=0.15.0" },
# Legacy which _no one_ should use anymore
{ name = "is-terminal", version = "*" },
# Stop this from being introduced into the tree without us realizing it
{ name = "once_cell_polyfill", version = "*" },
# Conflicts with our usage of mimalloc
# https://github.com/serai-dex/serai/issues/690
{ name = "tikv-jemalloc-sys", version = "*" },
]
[sources]
@@ -140,6 +152,6 @@ allow-git = [
"https://github.com/rust-lang-nursery/lazy-static.rs",
"https://github.com/kayabaNerve/elliptic-curves",
"https://github.com/monero-oxide/monero-oxide",
"https://github.com/kayabaNerve/monero-oxide",
"https://github.com/rust-bitcoin/rust-bip39",
"https://github.com/serai-dex/patch-polkadot-sdk",
]

View File

@@ -46,7 +46,7 @@ serai-db = { path = "../common/db", optional = true }
serai-env = { path = "../common/env" }
serai-primitives = { path = "../substrate/primitives", features = ["borsh"] }
serai-primitives = { path = "../substrate/primitives", default-features = false, features = ["std"] }
[features]
parity-db = ["serai-db/parity-db"]

View File

@@ -7,7 +7,7 @@ use dalek_ff_group::Ristretto;
pub(crate) use ciphersuite::{group::GroupEncoding, WrappedGroup, GroupCanonicalEncoding};
pub(crate) use schnorr_signatures::SchnorrSignature;
pub(crate) use serai_primitives::ExternalNetworkId;
pub(crate) use serai_primitives::network_id::ExternalNetworkId;
pub(crate) use tokio::{
io::{AsyncReadExt, AsyncWriteExt},
@@ -198,7 +198,7 @@ async fn main() {
KEYS.write().unwrap().insert(service, key);
let mut queues = QUEUES.write().unwrap();
if service == Service::Coordinator {
for network in serai_primitives::EXTERNAL_NETWORKS {
for network in ExternalNetworkId::all() {
queues.insert(
(service, Service::Processor(network)),
RwLock::new(Queue(db.clone(), service, Service::Processor(network))),
@@ -213,7 +213,7 @@ async fn main() {
};
// Make queues for each ExternalNetworkId
for network in serai_primitives::EXTERNAL_NETWORKS {
for network in ExternalNetworkId::all() {
// Use a match so we error if the list of NetworkIds changes
let Some(key) = read_key(match network {
ExternalNetworkId::Bitcoin => "BITCOIN_KEY",

View File

@@ -4,7 +4,7 @@ use ciphersuite::{group::GroupEncoding, FromUniformBytes, WrappedGroup, WithPref
use borsh::{BorshSerialize, BorshDeserialize};
use serai_primitives::ExternalNetworkId;
use serai_primitives::network_id::ExternalNetworkId;
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug, BorshSerialize, BorshDeserialize)]
pub enum Service {

View File

@@ -6,7 +6,7 @@ license = "MIT"
repository = "https://github.com/serai-dex/serai/tree/develop/networks/bitcoin"
authors = ["Luke Parker <lukeparker5132@gmail.com>", "Vrx <vrx00@proton.me>"]
edition = "2021"
rust-version = "1.85"
rust-version = "1.89"
[package.metadata.docs.rs]
all-features = true
@@ -30,8 +30,8 @@ k256 = { version = "^0.13.1", default-features = false, features = ["arithmetic"
frost = { package = "modular-frost", path = "../../crypto/frost", version = "0.11", default-features = false, features = ["secp256k1"] }
hex = { version = "0.4", default-features = false, optional = true }
serde = { version = "1", default-features = false, features = ["derive"], optional = true }
serde_json = { version = "1", default-features = false, optional = true }
core-json-traits = { version = "0.4", default-features = false, features = ["alloc"], optional = true }
core-json-derive = { version = "0.4", default-features = false, optional = true }
simple-request = { path = "../../common/request", version = "0.3", default-features = false, features = ["tokio", "tls", "basic-auth"], optional = true }
[dev-dependencies]
@@ -52,15 +52,16 @@ std = [
"rand_core/std",
"bitcoin/std",
"bitcoin/serde",
"k256/std",
"frost/std",
]
rpc = [
"std",
"hex/std",
"serde/std",
"serde_json/std",
"core-json-traits",
"core-json-derive",
"simple-request",
]
hazmat = []
default = ["std"]
default = ["std", "rpc"]

View File

@@ -14,7 +14,7 @@ pub(crate) mod crypto;
/// Wallet functionality to create transactions.
pub mod wallet;
/// A minimal asynchronous Bitcoin RPC client.
#[cfg(feature = "std")]
#[cfg(feature = "rpc")]
pub mod rpc;
#[cfg(test)]

View File

@@ -1,11 +1,8 @@
use core::fmt::Debug;
use std::collections::HashSet;
use core::{str::FromStr, fmt::Debug};
use std::{io::Read, collections::HashSet};
use thiserror::Error;
use serde::{Deserialize, de::DeserializeOwned};
use serde_json::json;
use simple_request::{hyper, Request, TokioClient as Client};
use bitcoin::{
@@ -14,19 +11,12 @@ use bitcoin::{
Txid, Transaction, BlockHash, Block,
};
#[derive(Clone, PartialEq, Eq, Debug, Deserialize)]
#[derive(Clone, Debug)]
pub struct Error {
code: isize,
message: String,
}
#[derive(Clone, Debug, Deserialize)]
#[serde(untagged)]
enum RpcResponse<T> {
Ok { result: T },
Err { error: Error },
}
/// A minimal asynchronous Bitcoin RPC client.
#[derive(Clone, Debug)]
pub struct Rpc {
@@ -34,14 +24,14 @@ pub struct Rpc {
url: String,
}
#[derive(Clone, PartialEq, Eq, Debug, Error)]
#[derive(Clone, Debug, Error)]
pub enum RpcError {
#[error("couldn't connect to node")]
ConnectionError,
#[error("request had an error: {0:?}")]
RequestError(Error),
#[error("node replied with invalid JSON")]
InvalidJson(serde_json::error::Category),
InvalidJson,
#[error("node sent an invalid response ({0})")]
InvalidResponse(&'static str),
#[error("node was missing expected methods")]
@@ -66,7 +56,7 @@ impl Rpc {
Rpc { client: Client::with_connection_pool().map_err(|_| RpcError::ConnectionError)?, url };
// Make an RPC request to verify the node is reachable and sane
let res: String = rpc.rpc_call("help", json!([])).await?;
let res: String = rpc.call("help", "[]").await?;
// Verify all methods we expect are present
// If we had a more expanded RPC, due to differences in RPC versions, it wouldn't make sense to
@@ -103,18 +93,16 @@ impl Rpc {
}
/// Perform an arbitrary RPC call.
pub async fn rpc_call<Response: DeserializeOwned + Debug>(
pub async fn call<Response: 'static + Default + core_json_traits::JsonDeserialize>(
&self,
method: &str,
params: serde_json::Value,
params: &str,
) -> Result<Response, RpcError> {
let mut request = Request::from(
hyper::Request::post(&self.url)
.header("Content-Type", "application/json")
.body(
serde_json::to_vec(&json!({ "jsonrpc": "2.0", "method": method, "params": params }))
.unwrap()
.into(),
format!(r#"{{ "method": "{method}", "params": {params} }}"#).as_bytes().to_vec().into(),
)
.unwrap(),
);
@@ -129,11 +117,53 @@ impl Rpc {
.await
.map_err(|_| RpcError::ConnectionError)?;
let res: RpcResponse<Response> =
serde_json::from_reader(&mut res).map_err(|e| RpcError::InvalidJson(e.classify()))?;
#[derive(Default, core_json_derive::JsonDeserialize)]
struct InternalError {
code: Option<i64>,
message: Option<String>,
}
#[derive(core_json_derive::JsonDeserialize)]
struct RpcResponse<T: core_json_traits::JsonDeserialize> {
result: Option<T>,
error: Option<InternalError>,
}
impl<T: core_json_traits::JsonDeserialize> Default for RpcResponse<T> {
fn default() -> Self {
Self { result: None, error: None }
}
}
// TODO: `core_json::ReadAdapter`
let mut res_vec = vec![];
res.read_to_end(&mut res_vec).map_err(|_| RpcError::ConnectionError)?;
let res = <RpcResponse<Response> as core_json_traits::JsonStructure>::deserialize_structure::<
_,
core_json_traits::ConstStack<32>,
>(res_vec.as_slice())
.map_err(|_| RpcError::InvalidJson)?;
match res {
RpcResponse::Ok { result } => Ok(result),
RpcResponse::Err { error } => Err(RpcError::RequestError(error)),
RpcResponse { result: Some(result), error: None } => Ok(result),
RpcResponse { result: None, error: Some(error) } => {
let code =
error.code.ok_or_else(|| RpcError::InvalidResponse("error was missing `code`"))?;
let code = isize::try_from(code)
.map_err(|_| RpcError::InvalidResponse("error code exceeded isize::MAX"))?;
let message =
error.message.ok_or_else(|| RpcError::InvalidResponse("error was missing `message`"))?;
Err(RpcError::RequestError(Error { code, message }))
}
// `invalidateblock` yields this edge case
// TODO: https://github.com/core-json/core-json/issues/18
RpcResponse { result: None, error: None } => {
if core::any::TypeId::of::<Response>() == core::any::TypeId::of::<()>() {
Ok(Default::default())
} else {
Err(RpcError::InvalidResponse("response lacked both a result and an error"))
}
}
_ => Err(RpcError::InvalidResponse("response contained both a result and an error")),
}
}
@@ -146,16 +176,17 @@ impl Rpc {
// tip block of the current chain. The "height" of a block is defined as the number of blocks
// present when the block was created. Accordingly, the genesis block has height 0, and
// getblockcount will return 0 when it's the only block, despite there being one block.
self.rpc_call("getblockcount", json!([])).await
usize::try_from(self.call::<u64>("getblockcount", "[]").await?)
.map_err(|_| RpcError::InvalidResponse("latest block number exceeded usize::MAX"))
}
/// Get the hash of a block by the block's number.
pub async fn get_block_hash(&self, number: usize) -> Result<[u8; 32], RpcError> {
let mut hash = self
.rpc_call::<BlockHash>("getblockhash", json!([number]))
.await?
.as_raw_hash()
.to_byte_array();
let mut hash =
BlockHash::from_str(&self.call::<String>("getblockhash", &format!("[{number}]")).await?)
.map_err(|_| RpcError::InvalidResponse("block hash was not valid hex"))?
.as_raw_hash()
.to_byte_array();
// bitcoin stores the inner bytes in reverse order.
hash.reverse();
Ok(hash)
@@ -163,16 +194,25 @@ impl Rpc {
/// Get a block's number by its hash.
pub async fn get_block_number(&self, hash: &[u8; 32]) -> Result<usize, RpcError> {
#[derive(Deserialize, Debug)]
#[derive(Default, core_json_derive::JsonDeserialize)]
struct Number {
height: usize,
height: Option<u64>,
}
Ok(self.rpc_call::<Number>("getblockheader", json!([hex::encode(hash)])).await?.height)
usize::try_from(
self
.call::<Number>("getblockheader", &format!(r#"["{}"]"#, hex::encode(hash)))
.await?
.height
.ok_or_else(|| {
RpcError::InvalidResponse("`getblockheader` did not include `height` field")
})?,
)
.map_err(|_| RpcError::InvalidResponse("block number exceeded usize::MAX"))
}
/// Get a block by its hash.
pub async fn get_block(&self, hash: &[u8; 32]) -> Result<Block, RpcError> {
let hex = self.rpc_call::<String>("getblock", json!([hex::encode(hash), 0])).await?;
let hex = self.call::<String>("getblock", &format!(r#"["{}", 0]"#, hex::encode(hash))).await?;
let bytes: Vec<u8> = FromHex::from_hex(&hex)
.map_err(|_| RpcError::InvalidResponse("node didn't use hex to encode the block"))?;
let block: Block = encode::deserialize(&bytes)
@@ -189,8 +229,13 @@ impl Rpc {
/// Publish a transaction.
pub async fn send_raw_transaction(&self, tx: &Transaction) -> Result<Txid, RpcError> {
let txid = match self.rpc_call("sendrawtransaction", json!([encode::serialize_hex(tx)])).await {
Ok(txid) => txid,
let txid = match self
.call::<String>("sendrawtransaction", &format!(r#"["{}"]"#, encode::serialize_hex(tx)))
.await
{
Ok(txid) => {
Txid::from_str(&txid).map_err(|_| RpcError::InvalidResponse("TXID was not valid hex"))?
}
Err(e) => {
// A const from Bitcoin's bitcoin/src/rpc/protocol.h
const RPC_VERIFY_ALREADY_IN_CHAIN: isize = -27;
@@ -211,7 +256,8 @@ impl Rpc {
/// Get a transaction by its hash.
pub async fn get_transaction(&self, hash: &[u8; 32]) -> Result<Transaction, RpcError> {
let hex = self.rpc_call::<String>("getrawtransaction", json!([hex::encode(hash)])).await?;
let hex =
self.call::<String>("getrawtransaction", &format!(r#"["{}"]"#, hex::encode(hash))).await?;
let bytes: Vec<u8> = FromHex::from_hex(&hex)
.map_err(|_| RpcError::InvalidResponse("node didn't use hex to encode the transaction"))?;
let tx: Transaction = encode::deserialize(&bytes)
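
Taken together, the rework above trades typed `serde_json::Value` parameters for raw JSON strings and `serde` responses for `core-json` ones. A hypothetical call site under the new `call` signature (`getblockcount` and `getblockhash` are real Bitcoin Core methods; the surrounding function is illustrative only):

// Fetch the hash of the chain tip via the reworked client.
async fn tip_hash(rpc: &Rpc) -> Result<String, RpcError> {
  // Parameters are now a raw JSON array formatted by the caller
  let count: u64 = rpc.call("getblockcount", "[]").await?;
  // Responses deserialize via `core_json_traits::JsonDeserialize`
  rpc.call::<String>("getblockhash", &format!("[{count}]")).await
}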

View File

@@ -14,9 +14,9 @@ pub(crate) async fn rpc() -> Rpc {
// If this node has already been interacted with, clear its chain
if rpc.get_latest_block_number().await.unwrap() > 0 {
rpc
.rpc_call(
.call(
"invalidateblock",
serde_json::json!([hex::encode(rpc.get_block_hash(1).await.unwrap())]),
&format!(r#"["{}"]"#, hex::encode(rpc.get_block_hash(1).await.unwrap())),
)
.await
.unwrap()

View File

@@ -41,21 +41,21 @@ async fn send_and_get_output(rpc: &Rpc, scanner: &Scanner, key: ProjectivePoint)
let block_number = rpc.get_latest_block_number().await.unwrap() + 1;
rpc
.rpc_call::<Vec<String>>(
.call::<Vec<String>>(
"generatetoaddress",
serde_json::json!([
1,
&format!(
r#"[1, "{}"]"#,
Address::from_script(&p2tr_script_buf(key).unwrap(), Network::Regtest).unwrap()
]),
),
)
.await
.unwrap();
// Mine until maturity
rpc
.rpc_call::<Vec<String>>(
.call::<Vec<String>>(
"generatetoaddress",
serde_json::json!([100, Address::p2sh(Script::new(), Network::Regtest).unwrap()]),
&format!(r#"[100, "{}"]"#, Address::p2sh(Script::new(), Network::Regtest).unwrap()),
)
.await
.unwrap();

View File

@@ -6,7 +6,7 @@ use std::{path::PathBuf, fs, process::Command};
/// Build contracts from the specified path, outputting the artifacts to the specified path.
///
/// Requires solc 0.8.26.
/// Requires solc 0.8.29.
pub fn build(
include_paths: &[&str],
contracts_path: &str,
@@ -35,8 +35,8 @@ pub fn build(
if let Some(version) = line.strip_prefix("Version: ") {
let version =
version.split('+').next().ok_or_else(|| "no value present on line".to_string())?;
if version != "0.8.26" {
Err(format!("version was {version}, 0.8.26 required"))?
if version != "0.8.29" {
Err(format!("version was {version}, 0.8.29 required"))?
}
}
}

View File

@@ -16,10 +16,12 @@ rustdoc-args = ["--cfg", "docsrs"]
workspace = true
[dependencies]
subtle = { version = "2", default-features = false, features = ["std"] }
sha3 = { version = "0.10", default-features = false, features = ["std"] }
group = { version = "0.13", default-features = false, features = ["alloc"] }
k256 = { version = "^0.13.1", default-features = false, features = ["std", "arithmetic"] }
std-shims = { path = "../../../common/std-shims", version = "0.1", default-features = false }
subtle = { version = "2", default-features = false }
sha3 = { version = "0.10", default-features = false }
group = { version = "0.13", default-features = false }
k256 = { version = "^0.13.1", default-features = false, features = ["arithmetic"] }
[build-dependencies]
build-solidity-contracts = { path = "../build-contracts", version = "0.1" }
@@ -40,3 +42,8 @@ alloy-provider = { version = "1", default-features = false }
alloy-node-bindings = { version = "1", default-features = false }
tokio = { version = "1", default-features = false, features = ["macros"] }
[features]
alloc = ["std-shims/alloc", "group/alloc"]
std = ["alloc", "std-shims/std", "subtle/std", "sha3/std", "k256/std"]
default = ["std"]

View File

@@ -1,5 +1,5 @@
// SPDX-License-Identifier: AGPL-3.0-only
pragma solidity ^0.8.26;
pragma solidity ^0.8.29;
/// @title A library for verifying Schnorr signatures
/// @author Luke Parker <lukeparker@serai.exchange>

View File

@@ -1,5 +1,5 @@
// SPDX-License-Identifier: AGPL-3.0-only
pragma solidity ^0.8.26;
pragma solidity ^0.8.29;
import "../Schnorr.sol";

View File

@@ -2,6 +2,7 @@
#![doc = include_str!("../README.md")]
#![deny(missing_docs)]
#![allow(non_snake_case)]
#![cfg_attr(not(feature = "std"), no_std)]
mod public_key;
pub use public_key::PublicKey;

View File

@@ -1,4 +1,4 @@
use std::io;
use std_shims::io;
use sha3::{Digest, Keccak256};
@@ -77,6 +77,7 @@ impl Signature {
}
/// Write the signature.
#[cfg(feature = "alloc")]
pub fn write(&self, writer: &mut impl io::Write) -> io::Result<()> {
writer.write_all(&self.to_bytes())
}

View File

@@ -1,50 +0,0 @@
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
# This GPG-signed message exists to confirm the SHA256 sums of Monero binaries.
#
# Please verify the signature against the key for binaryFate in the
# source code repository (/utils/gpg_keys).
#
#
## CLI
6122f0bcaca12d5badd92002338847d16032f6d52d86155c203bcb67d4fe1518 monero-android-armv7-v0.18.4.2.tar.bz2
3b248c3201f028205915403b4b2f173df0dd8bf47eeb268fd67a4661251469d3 monero-android-armv8-v0.18.4.2.tar.bz2
b4e2b7de80107a1b4613b878d8e2114244b3fb16397821d69baa72d9b0f8c8d5 monero-freebsd-x64-v0.18.4.2.tar.bz2
ecb2577499a3b0901d731e11d462d3fadcd70095f3ab0def0c27ee64dc56b061 monero-linux-armv7-v0.18.4.2.tar.bz2
a39530054dac348b219f1048a24ca629da26990f72cf9c1f6b6853e3d8c39a79 monero-linux-armv8-v0.18.4.2.tar.bz2
18492ace80bf8ef2f44aa9a99b4f20adf00fd59c675a6a496211a720088d5d1a monero-linux-riscv64-v0.18.4.2.tar.bz2
41d023f2357244ea43ee0a74796f5705ce75ce7373a5865d4959fefa13ecab06 monero-linux-x64-v0.18.4.2.tar.bz2
03e77a4836861a47430664fa703dd149a355b3b214bc400b04ed38eb064a3ef0 monero-linux-x86-v0.18.4.2.tar.bz2
9b98da6911b4769abef229c20e21f29d919b11db156965d6f139d2e1ad6625c2 monero-mac-armv8-v0.18.4.2.tar.bz2
b1b1b580320118d3b6eaa5575fdbd73cf4db90fcc025b7abf875c5e5b4e335c1 monero-mac-x64-v0.18.4.2.tar.bz2
14dd5aa11308f106183dd7834aa200e74ce6f3497103973696b556e893a4fef2 monero-win-x64-v0.18.4.2.zip
934d9dbeb06ff5610d2c96ebe34fa480e74f78eaeb3fa3e47d89b7961c9bc5e0 monero-win-x86-v0.18.4.2.zip
e9ec2062b3547db58f00102e6905621116ab7f56a331e0bc9b9e892607b87d24 monero-source-v0.18.4.2.tar.bz2
#
## GUI
9d6e87add7e3ac006ee34c13c4f629252595395f54421db768f72dc233e94ea8 monero-gui-install-win-x64-v0.18.4.2.exe
e4fcdea3f0ff27c3616a8a75545f42a4e4866ea374fa2eeaa9c87027573358ea monero-gui-linux-x64-v0.18.4.2.tar.bz2
3dfee5c5d8e000c72eb3755bf0eb03ca7c5928b69c3a241e147ad22d144e00a7 monero-gui-mac-armv8-v0.18.4.2.dmg
16abadcbd608d4f7ba20d17a297f2aa2c9066d33f6f22bf3fcdca679ab603990 monero-gui-mac-x64-v0.18.4.2.dmg
4daff8850280173d46464ba9a9de7f712228ad1ef76a1c4954531e4fd2b86d86 monero-gui-win-x64-v0.18.4.2.zip
691085e61ece6c56738431f3cfd395536ca0675214e5991e0dbfab85025e82d7 monero-gui-source-v0.18.4.2.tar.bz2
#
#
# ~binaryFate
-----BEGIN PGP SIGNATURE-----
iQIzBAEBCAAdFiEEgaxZH+nEtlxYBq/D8K9NRioL35IFAmitx+kACgkQ8K9NRioL
35J6cQ/7ByvGstg/a5lIYbB+Lz5bNiPozCILD9/offvC7GgOvna9rkHuofuLS+pX
qhYEMrjFjmp03XMY+i68M83qkBEZ+yU5iNDbwRuHUNMMWaaGlhnhm3nyUVtDpjjr
4xwVsee+dzi0JZhVQG7HJFURiP2Ub5Ua6bSaATDoT/aUYdhmrOnQiH2+VxogiCv3
JStDqXq6LpFjzw7UkAfxxu1PW+AQFNBzi3L0qWfzb5WWL7xuK63wXGmEkYBlvult
qt3LUhDUzMrfZ5GiiOYDEw44Y2atD4ibOYtBnllCX9CKNb0o2KKU6Qkj+CYqqtnE
uGNOt1oT09VPOtE7OUkBLVkALjef7ZXRibE7tN4wSnsrG39DP795/52L6CGJbl4n
UDnHzLCUbuvhnoAu5U+rUP5nUEDYS9ANNyj610ogNCo7YjfzLH641WSQ/UnuXKkA
RmK8xIiKoOnUeOanX99zqeXqV7gQdQMlfwLUr3pQzCI2YjdvxdRoedSEi5nX5KvO
Snf3BcCYMBemGYqVMdo95tc0Gmsw12/O8WwrBbTea+PeAXJuLaBxrLNn+RNZLfF/
UJYq2VcEwxG6vXb3cJ5lDKmRDDRI8Fxu6Amdab+6ponhM8Zy3eAynVIO952pLA7N
dtl72RsimM+sgHXP4ERYL4c6WARSHE5sAiog43dr56l3PPmM8pE=
=SoHG
-----END PGP SIGNATURE-----

View File

@@ -0,0 +1,50 @@
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
# This GPG-signed message exists to confirm the SHA256 sums of Monero binaries.
#
# Please verify the signature against the key for binaryFate in the
# source code repository (/utils/gpg_keys).
#
#
## CLI
7c2ad18ca3a1ad5bc603630ca935a753537a38a803e98d645edd6a3b94a5f036 monero-android-armv7-v0.18.4.4.tar.bz2
eb81b71f029884ab5fec76597be583982c95fd7dc3fc5f5083a422669cee311e monero-android-armv8-v0.18.4.4.tar.bz2
bc539178df23d1ae8b69569d9c328b5438ae585c0aacbebe12d8e7d387a745b0 monero-freebsd-x64-v0.18.4.4.tar.bz2
2040dc22748ef39ed8a755324d2515261b65315c67b91f449fa1617c5978910b monero-linux-armv7-v0.18.4.4.tar.bz2
b9daede195a24bdd05bba68cb5cb21e42c2e18b82d4d134850408078a44231c5 monero-linux-armv8-v0.18.4.4.tar.bz2
c939ea6e8002798f24a56ac03cbfc4ff586f70d7d9c3321b7794b3bcd1fa4c45 monero-linux-riscv64-v0.18.4.4.tar.bz2
7fe45ee9aade429ccdcfcad93b905ba45da5d3b46d2dc8c6d5afc48bd9e7f108 monero-linux-x64-v0.18.4.4.tar.bz2
8c174b756e104534f3d3a69fe68af66d6dc4d66afa97dfe31735f8d069d20570 monero-linux-x86-v0.18.4.4.tar.bz2
645e9bbae0275f555b2d72a9aa30d5f382df787ca9528d531521750ce2da9768 monero-mac-armv8-v0.18.4.4.tar.bz2
af3d98f09da94632db3e2f53c62cc612e70bf94aa5942d2a5200b4393cd9c842 monero-mac-x64-v0.18.4.4.tar.bz2
7eb3b87a105b3711361dd2b3e492ad14219d21ed8fd3dd726573a6cbd96e83a6 monero-win-x64-v0.18.4.4.zip
a148a2bd2b14183fb36e2cf917fce6f33fb687564db2ed53193b8432097ab398 monero-win-x86-v0.18.4.4.zip
84570eee26238d8f686605b5e31d59569488a3406f32e7045852de91f35508a2 monero-source-v0.18.4.4.tar.bz2
#
## GUI
4c81c8e97bd542daa453776d888557db1ceb2a718d43f6135ad68b12c8119948 monero-gui-install-win-x64-v0.18.4.4.exe
e45cb3fa9d972d67628cfed6463fb7604ae1414a11ba449f5e2f901c769ac788 monero-gui-linux-x64-v0.18.4.4.tar.bz2
a6f071719c401df339dba2d43ec6fffe103fda3e1df46f354b2496f34bb61cc4 monero-gui-mac-armv8-v0.18.4.4.dmg
811df70811a25f31289f24ebc0edc8f7648670384698d4c768bac5c2acbf2026 monero-gui-mac-x64-v0.18.4.4.dmg
b96faa56aa77cabed1f31f3fc9496e756a8da8c1124da2b9cb0b3730a8b6fbd9 monero-gui-win-x64-v0.18.4.4.zip
a7f6b91bc9efaa83173a397614626bf7612123e0017a48f66137ac397f7d19f8 monero-gui-source-v0.18.4.4.tar.bz2
#
#
# ~binaryFate
-----BEGIN PGP SIGNATURE-----
iQIzBAEBCAAdFiEEgaxZH+nEtlxYBq/D8K9NRioL35IFAmkbGLgACgkQ8K9NRioL
35LWYRAAnPeUu7TADV9Nly2gBlwu7bMK6l7pcUzs3hHhCMpg/Zb7wF8lx4D/r/hT
3wf3gNVK6tYl5GMPpF7GSKvK35SSzNN+8khRd7vhRByG75LGLnrNlcBsQU2wOzUv
Rmm2R8L8GP0B/+zXO92uJDMZ7Q7x72O+3fVX05217HBwz2kvzE1NpXe+EJPnUukA
Tr5CRnxKhxPbilvIhoEHdwkScMZqHMfsbdrefrB3KpO3xEaUz+gO9wESp7nzr4vp
Du6gJYBPK25Z2heZHCRsGN4WQP4QQv4MC0IFczc9fkVDBjywsJeNRRUbGtxR/BNt
vNJGI/kS+7KV140j6GkqAh/leZcaVJ5LRyCaHAwEQNA2T5okhrM0WZpoOAsZMi5K
bW4lNOXfWSw6/tokEPeuoi49yw0f9z0C8a4VLNOZGWKqmHcsA8WE6oVfmvVk6xWu
BqTU1Z9LJqL17GWRAReSX1ZuNA0Q0Pb/klUwP4X2afJcCVZ2YeBNr4jr21u3dYXY
QiLj0Gv7gg7a/GiMpVglNn5GzCu6mT0D94sbMNK+U5Tbve7aOtijJZ8JR62eO/mR
h+oNEys/xEcP9PQ5p74cNL71hNSfWSOcNi+GLSgXC75vsOGr7i96uaamilsHnsYB
p8PZMHzOf1pi6i/L5oOEuRgaujd9IjyCbxoYh3bbxxjBOhNEMqU=
=CVLA
-----END PGP SIGNATURE-----

View File

@@ -1,20 +1,32 @@
# rust:1.89.0-slim-bookworm as of August 1st, 2025 (GMT)
FROM --platform=linux/amd64 rust@sha256:703cfb0f80db8eb8a3452bf5151162472039c1b37fe4fb2957b495a6f0104ae7 AS deterministic
#check=skip=FromPlatformFlagConstDisallowed
# We want to explicitly set the platform to ensure a constant host environment
# Move to a Debian package snapshot
RUN rm -rf /etc/apt/sources.list.d/debian.sources && \
rm -rf /var/lib/apt/lists/* && \
echo "deb [arch=amd64] http://snapshot.debian.org/archive/debian/20250801T000000Z bookworm main" > /etc/apt/sources.list && \
apt update
# rust:1.91.1-alpine as of November 11th, 2025 (GMT)
FROM --platform=linux/amd64 rust@sha256:700c0959b23445f69c82676b72caa97ca4359decd075dca55b13339df27dc4d3
# Install dependencies
RUN apt update -y && apt upgrade -y && apt install -y clang
# In order to compile the runtime, including the `proc-macro`s and build scripts, we need the
# required development libraries. These are traditionally provided by `musl-dev`, which is not
# inherently included with this image (https://github.com/rust-lang/docker-rust/issues/68). While we
# could install it here, we'd be unable to pin the installed package by its hash as desired.
#
# Rust does have self-contained libraries, intended to be used when the desired development files
# are not otherwise available. These can be enabled with `link-self-contained=yes`. Unfortunately,
# this doesn't work here (https://github.com/rust-lang/rust/issues/149371).
#
# While we can't set `link-self-contained=yes`, we can install Rust's self-contained libraries onto
# our system so they're generally available.
RUN echo '#!/bin/sh' > libs.sh
RUN echo 'set -e' >> libs.sh
RUN echo 'SYSROOT=$(rustc --print sysroot)' >> libs.sh
RUN echo 'LIBS=$SYSROOT/lib/rustlib/x86_64-unknown-linux-musl/lib/self-contained' >> libs.sh
RUN echo 'ln -s $LIBS/Scrt1.o $LIBS/crti.o $LIBS/crtn.o /usr/lib' >> libs.sh
# We also need `libc.so`, which is already present on the system, just not under that name
RUN echo 'ln -s /lib/libc.musl-x86_64.so.1 /usr/lib/libc.so' >> libs.sh
RUN /bin/sh ./libs.sh
# Add the wasm toolchain
# Add the WASM toolchain
RUN rustup target add wasm32v1-none
FROM deterministic
# Add files for build
ADD patches /serai/patches
ADD common /serai/common
@@ -34,7 +46,8 @@ ADD AGPL-3.0 /serai
WORKDIR /serai
# Build the runtime, copying it to the volume if it exists
CMD cargo build --release -p serai-runtime && \
mkdir -p /volume && \
cp /serai/target/release/wbuild/serai-runtime/serai_runtime.wasm /volume/serai.wasm
# Build the runtime
RUN cargo build --release -p serai-runtime
# Copy the runtime to the provided volume
CMD ["cp", "/serai/target/release/wbuild/serai-runtime/serai_runtime.wasm", "/volume/serai.wasm"]

View File

@@ -16,9 +16,10 @@ pub fn coordinator(
) {
let db = network.db();
let longer_reattempts = if network == Network::Dev { "longer-reattempts" } else { "" };
let setup = mimalloc(Os::Debian).to_string() +
let setup = mimalloc(Os::Debian) +
&build_serai_service(
"",
Os::Debian,
network.release(),
&format!("{db} {longer_reattempts}"),
"serai-coordinator",

View File

@@ -3,8 +3,14 @@ use std::path::Path;
use crate::{Network, Os, mimalloc, os, build_serai_service, write_dockerfile};
pub fn ethereum_relayer(orchestration_path: &Path, network: Network) {
let setup = mimalloc(Os::Debian).to_string() +
&build_serai_service("", network.release(), network.db(), "serai-ethereum-relayer");
let setup = mimalloc(Os::Debian) +
&build_serai_service(
"",
Os::Debian,
network.release(),
network.db(),
"serai-ethereum-relayer",
);
let env_vars = [
("DB_PATH", "/volume/ethereum-relayer-db".to_string()),

View File

@@ -143,13 +143,20 @@ WORKDIR /home/{user}
}
}
fn build_serai_service(prelude: &str, release: bool, features: &str, package: &str) -> String {
fn build_serai_service(
prelude: &str,
os: Os,
release: bool,
features: &str,
package: &str,
) -> String {
let profile = if release { "release" } else { "debug" };
let profile_flag = if release { "--release" } else { "" };
format!(
r#"
FROM rust:1.90-slim-trixie AS builder
(match os {
Os::Debian => {
r#"
FROM rust:1.91-slim-trixie AS builder
COPY --from=mimalloc-debian libmimalloc.so /usr/lib
RUN echo "/usr/lib/libmimalloc.so" >> /etc/ld.so.preload
@@ -161,7 +168,25 @@ RUN apt install -y pkg-config libclang-dev clang
# Dependencies for the Serai node
RUN apt install -y make protobuf-compiler
"#
}
Os::Alpine => {
r#"
FROM rust:1.91-alpine AS builder
COPY --from=mimalloc-alpine libmimalloc.so /usr/lib
ENV LD_PRELOAD=libmimalloc.so
RUN apk update && apk upgrade
# Add dev dependencies
RUN apk add clang-dev
"#
}
})
.to_owned() +
&format!(
r#"
# Add the wasm toolchain
RUN rustup target add wasm32v1-none
@@ -195,7 +220,7 @@ RUN --mount=type=cache,target=/root/.cargo \
cargo build {profile_flag} --features "{features}" -p {package} && \
mv /serai/target/{profile}/{package} /serai/bin
"#
)
)
}
pub fn write_dockerfile(path: PathBuf, dockerfile: &str) {
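Since the hunk only shows the new signature, a hypothetical call site may help show how the added `Os` parameter threads through (the output path, feature string, and package pairing are invented for illustration):

use std::path::PathBuf;
use crate::{Os, build_serai_service, write_dockerfile};

fn example() {
  // Hypothetical: empty prelude, Alpine base image, release build
  let dockerfile = build_serai_service("", Os::Alpine, true, "parity-db", "serai-message-queue");
  write_dockerfile(PathBuf::from("./message-queue.Dockerfile"), &dockerfile);
}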

View File

@@ -13,8 +13,8 @@ pub fn message_queue(
ethereum_key: <Ristretto as WrappedGroup>::G,
monero_key: <Ristretto as WrappedGroup>::G,
) {
let setup = mimalloc(Os::Debian).to_string() +
&build_serai_service("", network.release(), network.db(), "serai-message-queue");
let setup = mimalloc(Os::Alpine) +
&build_serai_service("", Os::Alpine, network.release(), network.db(), "serai-message-queue");
let env_vars = [
("COORDINATOR_KEY", hex::encode(coordinator_key.to_bytes())),
@@ -41,7 +41,7 @@ CMD {env_vars_str} serai-message-queue
"#
);
let run = os(Os::Debian, "", "messagequeue") + &run_message_queue;
let run = os(Os::Alpine, "", "messagequeue") + &run_message_queue;
let res = setup + &run;
let mut message_queue_path = orchestration_path.to_path_buf();

View File

@@ -1,36 +1,85 @@
use crate::Os;
pub fn mimalloc(os: Os) -> &'static str {
const ALPINE_MIMALLOC: &str = r#"
// 2.2.4
const MIMALLOC_VERSION: &str = "fbd8b99c2b828428947d70fdc046bb55609be93e";
const FLAGS: &str =
"-DMI_SECURE=ON -DMI_GUARDED=ON -DMI_BUILD_STATIC=OFF -DMI_BUILD_OBJECT=OFF -DMI_BUILD_TESTS=OFF";
pub fn mimalloc(os: Os) -> String {
let build_script = |env, flags| {
format!(
r#"
#!/bin/sh
set -e
git clone https://github.com/microsoft/mimalloc
cd mimalloc
git checkout {MIMALLOC_VERSION}
# For some reason, `mimalloc` contains binary blobs in the repository, so we remove those now
rm -rf .git ./bin
mkdir -p out/secure
cd out/secure
# `CMakeLists.txt` requires a C++ compiler, but `mimalloc` does not use one by default. We claim
# there is a working C++ compiler so CMake doesn't complain, letting us avoid installing one
# unnecessarily. If it were ever invoked, our choice of `false` would immediately let us know.
# https://github.com/microsoft/mimalloc/issues/1179
{env} CXX=false cmake -DCMAKE_CXX_COMPILER_WORKS=1 {FLAGS} {flags} ../..
make
cd ../..
# Copy the built library to the original directory
cd ..
cp mimalloc/out/secure/libmimalloc-secure.so ./libmimalloc.so
# Clean up the source directory
rm -rf ./mimalloc
"#
)
};
let build_commands = |env, flags| {
let mut result = String::new();
for line in build_script(env, flags)
.lines()
.map(|line| {
assert!(!line.contains('"'));
format!(r#"RUN echo "{line}" >> ./mimalloc.sh"#)
})
.chain(["RUN /bin/sh ./mimalloc.sh", "RUN rm ./mimalloc.sh"].into_iter().map(str::to_string))
{
result.push_str(&line);
result.push('\n');
}
result
};
let alpine_build = build_commands("CC=$(uname -m)-alpine-linux-musl-gcc", "-DMI_LIBC_MUSL=ON");
let debian_build = build_commands("", "");
let alpine_mimalloc = format!(
r#"
FROM alpine:latest AS mimalloc-alpine
RUN apk update && apk upgrade && apk --no-cache add gcc g++ libc-dev make cmake git
RUN git clone https://github.com/microsoft/mimalloc && \
cd mimalloc && \
git checkout 43ce4bd7fd34bcc730c1c7471c99995597415488 && \
mkdir -p out/secure && \
cd out/secure && \
cmake -DMI_SECURE=ON ../.. && \
make && \
cp ./libmimalloc-secure.so ../../../libmimalloc.so
"#;
RUN apk update && apk upgrade && apk --no-cache add musl-dev gcc make cmake git
const DEBIAN_MIMALLOC: &str = r#"
{alpine_build}
"#
);
let debian_mimalloc = format!(
r#"
FROM debian:trixie-slim AS mimalloc-debian
RUN apt update && apt upgrade -y && apt install -y gcc g++ make cmake git
RUN git clone https://github.com/microsoft/mimalloc && \
cd mimalloc && \
git checkout 43ce4bd7fd34bcc730c1c7471c99995597415488 && \
mkdir -p out/secure && \
cd out/secure && \
cmake -DMI_SECURE=ON ../.. && \
make && \
cp ./libmimalloc-secure.so ../../../libmimalloc.so
"#;
RUN apt update && apt upgrade -y && apt install -y gcc make cmake git
{debian_build}
"#
);
match os {
Os::Alpine => ALPINE_MIMALLOC,
Os::Debian => DEBIAN_MIMALLOC,
Os::Alpine => alpine_mimalloc,
Os::Debian => debian_mimalloc,
}
}
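The transform above turns a multi-line shell script into one `RUN echo` per line, so the script lands in the image without shipping any external file and the Dockerfile stays a pure function of the source. A standalone sketch of that transform, with invented names rather than the crate's actual helpers:

// Each script line becomes `RUN echo "..." >> file`; the script is then run and removed.
fn script_to_run_lines(script: &str, file: &str) -> String {
  let mut out = String::new();
  for line in script.lines() {
    // Double quotes would break the `echo "..."` quoting, hence the guard
    assert!(!line.contains('"'));
    out.push_str(&format!("RUN echo \"{line}\" >> ./{file}\n"));
  }
  out.push_str(&format!("RUN /bin/sh ./{file}\nRUN rm ./{file}\n"));
  out
}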

View File

@@ -7,7 +7,7 @@ pub fn bitcoin(orchestration_path: &Path, network: Network) {
const DOWNLOAD_BITCOIN: &str = r#"
FROM alpine:latest AS bitcoin
ENV BITCOIN_VERSION=29.1
ENV BITCOIN_VERSION=30.0
RUN apk --no-cache add wget git gnupg
@@ -29,7 +29,7 @@ RUN tar xzvf bitcoin-${BITCOIN_VERSION}-$(uname -m)-linux-gnu.tar.gz
RUN mv bitcoin-${BITCOIN_VERSION}/bin/bitcoind .
"#;
let setup = mimalloc(Os::Debian).to_string() + DOWNLOAD_BITCOIN;
let setup = mimalloc(Os::Debian) + DOWNLOAD_BITCOIN;
let run_bitcoin = format!(
r#"

View File

@@ -17,7 +17,7 @@ pub fn ethereum(orchestration_path: &Path, network: Network) {
(reth(network), nimbus(network))
};
let download = mimalloc(Os::Alpine).to_string() + &el_download + &cl_download;
let download = mimalloc(Os::Alpine) + &el_download + &cl_download;
let run = format!(
r#"
@@ -26,7 +26,7 @@ CMD ["/run.sh"]
"#,
network.label()
);
let run = mimalloc(Os::Debian).to_string() +
let run = mimalloc(Os::Debian) +
&os(Os::Debian, &(el_run_as_root + "\r\n" + &cl_run_as_root), "ethereum") +
&el_run +
&cl_run +

View File

@@ -10,7 +10,7 @@ fn monero_internal(
monero_binary: &str,
ports: &str,
) {
const MONERO_VERSION: &str = "0.18.4.2";
const MONERO_VERSION: &str = "0.18.4.4";
let arch = match std::env::consts::ARCH {
// We probably would run this without issues yet it's not worth needing to provide support for
@@ -41,7 +41,7 @@ RUN tar -xvjf monero-linux-{arch}-v{MONERO_VERSION}.tar.bz2 --strip-components=1
network.label(),
);
let setup = mimalloc(os).to_string() + &download_monero;
let setup = mimalloc(os) + &download_monero;
let run_monero = format!(
r#"

View File

@@ -17,17 +17,18 @@ pub fn processor(
substrate_evrf_key: Zeroizing<Vec<u8>>,
network_evrf_key: Zeroizing<Vec<u8>>,
) {
let setup = mimalloc(Os::Debian).to_string() +
let setup = mimalloc(Os::Debian) +
&build_serai_service(
if coin == "ethereum" {
r#"
RUN cargo install svm-rs
RUN svm install 0.8.26
RUN svm use 0.8.26
RUN svm install 0.8.29
RUN svm use 0.8.29
"#
} else {
""
},
Os::Debian,
network.release(),
&format!("binaries {} {coin}", network.db()),
"serai-processor",

Some files were not shown because too many files have changed in this diff.