165 Commits

Author SHA1 Message Date
Luke Parker
0849d60f28 Run Bitcoin, Monero nodes on Alpine
This didn't work well previously, presumably due to stack-size limitations, so
a shell script is included to raise the default stack-size limit. This should
be tried again.
2025-12-08 02:30:34 -05:00
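
The wrapper script itself isn't reproduced here. As a sketch of a related
in-process technique (a different approach than the shell script this commit
describes), a service can run its logic on a thread spawned with an explicitly
larger stack, since musl-based images such as Alpine default to comparatively
small thread stacks:

  use std::thread;

  fn real_main() {
    // The service's actual entry point would go here.
  }

  fn main() {
    // 8 MiB is an assumed figure, roughly matching glibc's common default.
    thread::Builder::new()
      .stack_size(8 * 1024 * 1024)
      .spawn(real_main)
      .expect("failed to spawn the main thread")
      .join()
      .expect("the main thread panicked");
  }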
Luke Parker
3a792f9ce5 Update documentation on the serai-runtime build.rs 2025-12-08 02:22:29 -05:00
Luke Parker
50959fa0e3 Update the polkadot-sdk used
Removes `parity-wasm` as a dependency, closing
https://github.com/serai-dex/serai/issues/227 and tidying our `deny.toml`.

This removes the `import-memory` flag from the linker, as part of
`parity-wasm`'s usage was to map imports into exports
(5a1128b94b/substrate/client/executor/common/src/runtime_blob/runtime_blob.rs (L91-L142)).
2025-12-08 02:22:25 -05:00
Luke Parker
2fb90ebe55 Extend the set of Ethereum-ecosystem crates we patch to be empty
`ruint` pulls in many versions of many crates. This has it pull in fewer.
2025-12-06 08:27:34 -05:00
Luke Parker
b24adcbd14 Add panic-on-poison to no-std std_shims::sync::Mutex
We already had this behavior on `std`. It was omitted on no-`std` due to
deferring to `spin::Mutex`, which does not track poisoning at all. This
increases the parity between the two.

Part of https://github.com/serai-dex/serai/issues/698.
2025-12-06 08:06:38 -05:00
Luke Parker
b791256648 Remove substrate-wasm-builder
By defining our own build script, we gain complete clarity and control over how
the WASM is built. This also removes the need to patch the upstream, which
allowed pollution of the environment variables from the host.

Notable appreciation is given to
https://github.com/rust-lang/rust/issues/145491 for identifying an issue
encountered here, with the associated PR clarifying the necessary flags for the
linker to fix this.
2025-12-04 23:23:38 -05:00
Luke Parker
36ac9c56a4 Remove workaround for lack of musl-dev now that musl-dev is provided in Rust Alpine images
Additionally optimizes the build process a bit by leaving only the runtime
(and `busybox`) in the final image, and by building the runtime without `std`
(as we solely need the WASM blob from this process).
2025-12-04 11:58:38 -05:00
Luke Parker
57bf4984f8 panic = "abort"
`panic = "unwind"` was originally a requirement of Substrate, notably due to
its [native runtime](https://github.com/paritytech/substrate/issues/10874).
This does not mean all of Serai should use this setting, however.

As the native runtime has been removed, we no longer need this for the
Substrate node. Upon review of our derivative, a panic guard is only used
when fetching the version from the runtime, causing an error on boot if a
panic occurs. Accordingly, we shouldn't have a need for `panic = "unwind"`
within the node, and the runtime itself should be fine.

The rest of Serai's services already registered bespoke hooks to ensure any
panic caused the process to exit. Those are left as-is, even though they're
now unnecessary.
2025-12-04 11:58:38 -05:00
Luke Parker
87750407de cargo-deny 0.18.8, remove bip39 git dependency
The former is necessary due to `cargo-deny` misinterpreting select licenses.
The latter is finally possible with the recent 2.2.1 release 🎉
2025-12-04 11:58:28 -05:00
Luke Parker
3ce90c55d9 Define a 512 KiB block size limit 2025-12-02 21:24:05 -05:00
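
Such a limit typically reduces to a single constant consulted during block
construction and validation; a minimal sketch (the constant and function names
here are assumed, not the runtime's actual code):

  /// Hypothetical constant mirroring the described policy: a block may not
  /// exceed 512 KiB of encoded data.
  pub const BLOCK_SIZE_LIMIT: u32 = 512 * 1024;

  /// Whether a transaction of `additional` bytes fits in a block already
  /// using `used` bytes (sketch only).
  pub fn fits_in_block(used: u32, additional: u32) -> bool {
    used.checked_add(additional).is_some_and(|total| total <= BLOCK_SIZE_LIMIT)
  }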
Luke Parker
ff95c58341 Round out the runtime
Ensures the block's size limit is respected.

Defines a policy for weights. While I'm unsure I want to commit to this
forever, I do want to acknowledge it's valid and well-defined.

Cleans up the `serai-runtime` crate a bit with further modules in the `wasm`
folder.
2025-12-02 21:16:34 -05:00
Luke Parker
98044f93b1 Stub the in-instructions pallet 2025-12-02 16:46:10 -05:00
Luke Parker
eb04f873d5 Stub the genesis-liquidity pallet 2025-12-02 16:46:06 -05:00
Luke Parker
af74c318aa Add event emissions to the DEX pallet 2025-12-02 13:31:33 -05:00
Luke Parker
d711d8915f Update docs Ruby/gem versions 2025-12-02 13:20:17 -05:00
Luke Parker
3d549564a8 Misc tweaks in the style of the last commit
Notably removes the `kvdb-rocksdb` patch via updating the Substrate version
used to one which disables the `jemalloc` feature itself.

Simplifies the path of the built WASM file within the Dockerfile to consumers.
This also ensures that, if the image is built, the path of the WASM file is as
expected (previously unasserted).
2025-12-02 09:10:44 -05:00
Luke Parker
9a75f92864 Thoroughly update versions and methodology
For hash-pinned dependencies, adds comments documenting the associated
versions.

Adds a pin to `slither-analyzer`, which was previously missing.

Updates to Monero 0.18.4.4.

`mimalloc` now has the correct option set when building for `musl`. A C++
compiler is no longer required in its Docker image.

The runtime's `Dockerfile` now symlinks a `libc.so` already present on the
image instead of creating one itself. It also builds the runtime within the
image to ensure it only happens once. The test ensuring the methodology is
reproducible has been updated accordingly: it no longer simply creates
containers from the image, but rebuilds the image entirely. This is also more
robust and arguably should have already been done.

The pin to the exact hash of the `patch-polkadot-sdk` repo in every
`Cargo.toml` has been removed. The lockfile already serves that role,
simplifying updating in the future.

The latest Rust nightly is adopted as well (superseding
https://github.com/serai-dex/serai/pull/697).

The `librocksdb-sys` patch is replaced with a `kvdb-rocksdb` patch, removing a
git dependency, thanks to https://github.com/paritytech/parity-common/pull/950.
2025-12-01 18:17:01 -05:00
Luke Parker
30ea9d9a06 Tidy the DEX pallet 2025-11-30 21:42:27 -05:00
Luke Parker
c45c973ca1 Remove musl-dev from runtime/Dockerfile
It wasn't pinned with a hash, only with a version tag. This ensures we are
deterministic to the image (specified by hash), `Cargo.lock`, and source code
alone.

Unfortunately, this was incredibly annoying to do; the exact process uncovered
a SIGSEGV in stable Rust. The extensive documentation details the solution.
Thankfully, it works now.
2025-11-27 03:37:37 -05:00
Luke Parker
6e37ac030d Add patch for alloy-eip2124 to an empty crate
Removes the `crc` dependency which had a unique author associated.
2025-11-26 17:01:03 -05:00
Luke Parker
e7c759c468 Improve substrate-median tests
The use of a dedicated test module ensures the API doesn't hide anything which
needs to be public. There are also now explicit tests for when the median is the
popped value.
2025-11-25 23:46:12 -05:00
Luke Parker
8ec0582237 Add module to calculate medians 2025-11-25 22:39:52 -05:00
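
The crate's actual API isn't shown in this log. As a sketch of the operation
these two commits describe (a window of values where the oldest is popped as
new values are pushed, with the median of the remainder reported), under
assumed names:

  use std::collections::VecDeque;

  /// A sliding window which can report the median of its contents.
  /// Sketch only; the real `substrate-median` API may differ.
  struct MedianWindow {
    window: VecDeque<u64>,
    capacity: usize,
  }

  impl MedianWindow {
    fn new(capacity: usize) -> Self {
      assert!(capacity != 0);
      Self { window: VecDeque::with_capacity(capacity), capacity }
    }

    /// Push a value, popping (and returning) the oldest value once at
    /// capacity.
    fn push(&mut self, value: u64) -> Option<u64> {
      let popped =
        if self.window.len() == self.capacity { self.window.pop_front() } else { None };
      self.window.push_back(value);
      popped
    }

    /// The median of the window, taking the lower of the two middle values
    /// when the window has an even length.
    fn median(&self) -> Option<u64> {
      let mut sorted = self.window.iter().copied().collect::<Vec<_>>();
      sorted.sort_unstable();
      sorted.get(sorted.len().checked_sub(1)? / 2).copied()
    }
  }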
Luke Parker
8d8e8a7a77 Remove unnecessary MSRVs from patches/ 2025-11-25 17:05:30 -05:00
Luke Parker
028ec3cce0 borsh 1.6.0
Bumps the MSRV for some of our crates, which is fine.
2025-11-25 16:58:19 -05:00
Luke Parker
c49215805f Update Substrate 2025-11-25 00:06:54 -05:00
Luke Parker
2ffdd2a01d Update monero-oxide, Substrate 2025-11-22 11:49:25 -05:00
Luke Parker
e1e6e67d4a Ensure desired pruning behavior is held within the node 2025-11-18 21:46:58 -05:00
Luke Parker
6b19780c7b Remove historical state access from Serai
Resolves https://github.com/serai-dex/serai/issues/694.
2025-11-18 21:27:47 -05:00
Luke Parker
6100c3ca90 Restore patches/dalek-ff-group
Ensures `crypto/dalek-ff-group` is pure.
2025-11-16 19:04:57 -05:00
Luke Parker
fa0ed4b180 Add validator sets RPC functions necessary for the coordinator 2025-11-16 17:38:08 -05:00
Luke Parker
0ea16f9e01 doc_auto_cfg -> doc_cfg 2025-11-16 17:38:08 -05:00
Luke Parker
7a314baa9f Update all of serai-coordinator to compile with the new serai-client-serai 2025-11-16 17:38:03 -05:00
Luke Parker
9891ccade8 Add From<*::Call> for Call to serai-abi 2025-11-16 16:43:06 -05:00
Luke Parker
f1f166c168 Restore publish_transaction RPC to Serai 2025-11-16 16:43:06 -05:00
Luke Parker
df4aee2d59 Update serai-client to solely be an umbrella crate of the dedicated client libraries 2025-11-16 16:43:05 -05:00
Luke Parker
302a43653f Add helper to get TemporalSerai as of the latest finalized block 2025-11-16 16:42:36 -05:00
Luke Parker
d219b77bd0 Add bindings to the events from the coins module to serai-client-serai 2025-11-15 16:13:25 -05:00
Luke Parker
fce26eaee1 Update the block RPCs to return null when missing, not an error
Promotes clarity.
2025-11-15 16:12:39 -05:00
Luke Parker
3cfbd9add7 Update serai-coordinator-cosign to the new serai-client-serai 2025-11-15 16:10:58 -05:00
Luke Parker
609cf06393 Update SetDecided to include the validators
Necessary for light-client protocols to follow along with consensus. Arguably,
anyone handling GRANDPA's consensus could peek at this through the consensus
commit anyways, but we shouldn't, so we don't defer to that.
2025-11-15 14:59:33 -05:00
Luke Parker
46b1f1b7ec Add test for the integrity of headers 2025-11-14 12:04:21 -05:00
Luke Parker
09113201e7 Fixes to the validator sets RPC 2025-11-14 11:19:02 -05:00
Luke Parker
556d294157 Add pallet-timestamp
Ensures the timestamp is included, is within expected parameters, and is
correct in relation to `pallet-babe`.
2025-11-14 09:59:32 -05:00
Luke Parker
82ca889ed3 Wrap Proposer so we can add the SeraiPreExecutionDigest (timestamps) 2025-11-14 08:02:54 -05:00
Luke Parker
cde0f753c2 Correct Serai header on genesis
Includes a couple misc fixes for the RPC as well.
2025-11-14 07:22:59 -05:00
Luke Parker
6ff0ef7aa6 Type the errors yielded by serai-node's RPC 2025-11-14 03:52:35 -05:00
Luke Parker
f9e3d1b142 Expand validator sets API with the rest of the events and some getters
We could've added a storage API and fetched fields that way, except we want
the storage to be opaque. That meant we needed to add the RPC routes to the
node, which also simplifies other people writing RPC code and fetching these
fields. The node itself could've used the storage API, except a lot of the
storage in validator-sets is marked opaque, only to be read via functions, so
extending the runtime made the most sense.
2025-11-14 03:37:06 -05:00
Luke Parker
a793aa18ef Make ethereum-schnorr-contract no-std and no-alloc eligible 2025-11-13 05:48:18 -05:00
Luke Parker
5662beeb8a Patch from parity-bip39 back to bip39
Per https://github.com/michalkucharczyk/rust-bip39/tree/mku-2.0.1-release,
`parity-bip39` was a fork to publish a release of `bip39` with two specific PRs
merged. Not only have those PRs been merged, but `bip39` now accepts
`bitcoin_hashes 0.14` (https://github.com/rust-bitcoin/rust-bip39/pull/76),
making this a great time to reconcile (even though it does technically add a
git dependency until the new release is cut...).
2025-11-13 05:25:41 -05:00
Luke Parker
509bd58f4e Add method to fetch a block's events to the RPC 2025-11-13 05:13:55 -05:00
Luke Parker
367a5769e8 Update deny.toml 2025-11-13 00:37:51 -05:00
Luke Parker
cb6eb6430a Update version of substrate 2025-11-13 00:17:19 -05:00
Luke Parker
4f82e5912c Use Alpine to build the runtime
Smaller, works without issue.
2025-11-12 23:02:22 -05:00
Luke Parker
ac7af40f2e Remove rust-src as a component for WASM
It's been unnecessary since the move to `wasm32v1-none`.
2025-11-12 23:00:57 -05:00
Luke Parker
264bdd46ca Update serai-runtime to compile a minimum subset of itself for non-WASM targets
We only really care about it as a WASM blob, given `serai-abi`, so there's no
need to compile it twice when it's expensive to build and we don't care about
the non-WASM copy at all.
2025-11-12 23:00:00 -05:00
Luke Parker
c52f7634de Patch librocksdb-sys to never enable jemalloc, which conflicts with mimalloc
Allows us to update mimalloc and enable the newly added guard pages.

Conflict identified by @PlasmaPower.
2025-11-11 23:04:05 -05:00
Luke Parker
21eaa5793d Error when not running the dev network if an explicit WASM runtime isn't provided 2025-11-11 23:02:20 -05:00
Luke Parker
c744a80d80 Check the finalized function doesn't claim unfinalized blocks are in fact finalized 2025-11-11 23:02:20 -05:00
Luke Parker
a34f9f6164 Build and run the message queue over Alpine
We previously stopped doing so for stability reasons, but this _should_ be tried
again.
2025-11-11 23:02:20 -05:00
Luke Parker
353683cfd2 revm 33 2025-11-11 23:02:16 -05:00
Luke Parker
d4f77159c4 Rust 1.91.1 due to the regression re: wasm builds 2025-11-11 09:08:30 -05:00
Luke Parker
191bf4bdea Remove std feature from revm
It's unnecessary and bloats the tree a decent amount.
2025-11-10 06:34:33 -05:00
Luke Parker
06a4824aba Move bitcoin-serai to core-json and feature-gate the RPC functionality 2025-11-10 05:31:13 -05:00
Luke Parker
e65a37e639 Update various versions 2025-11-10 04:02:02 -05:00
Luke Parker
4653ef4a61 Use dockertest for the newly added serai-client-serai test 2025-11-07 02:08:02 -05:00
Luke Parker
ce08fad931 Add initial basic tests for serai-client-serai 2025-11-06 20:12:37 -05:00
Luke Parker
1866bb7ae3 Begin work on the new RPC for the new node 2025-11-06 03:08:43 -05:00
Luke Parker
aff2065c31 Polkadot stable2509-1 2025-11-06 00:23:35 -05:00
Luke Parker
7300700108 Update misc versions 2025-11-05 19:11:33 -05:00
Luke Parker
31874ceeae serai-node which compiles and produces/finalizes blocks with --dev 2025-11-05 18:20:23 -05:00
Luke Parker
012b8fddae Get serai-node to compile again 2025-11-05 01:18:21 -05:00
Luke Parker
d2f58232c8 Tweak serai-coordinator-cosign to make it closer to compiling again
Adds `PartialOrd, Ord` derivations to some items in `serai-primitives` so they
may be used as keys within `borsh` maps.
2025-11-04 19:42:23 -05:00
Luke Parker
49794b6a75 Correct when spin::Lazy is exposed as std_shims::sync::LazyLock
It's intended to always be used when `std::sync::LazyLock` is not available,
even on `std`.
2025-11-04 19:28:26 -05:00
Luke Parker
973287d0a1 Smash serai-client so the processors don't need the entire lib to access their specific code
We previously controlled this with feature flags. It's just better to define
dedicated crates.
2025-11-04 19:27:53 -05:00
Luke Parker
1b499edfe1 Misc fixes so this compiles 2025-11-04 18:56:56 -05:00
Luke Parker
642848bd24 Bump revm 2025-11-04 13:31:46 -05:00
Luke Parker
f7fb78bdd6 Merge branch 'next' into next-polkadot-sdk 2025-11-04 13:24:00 -05:00
Luke Parker
65613750e1 Merge branch 'next' into next-polkadot-sdk 2025-11-04 12:06:13 -05:00
Luke Parker
56f6ba2dac Merge branch 'develop' into next-polkadot-sdk 2025-10-05 06:04:17 -04:00
Luke Parker
08f6af8bb9 Remove borsh from dkg
It pulls in a lot of bespoke dependencies for little direct utility.

Moves the necessary code into the processor.
2025-09-27 02:07:18 -04:00
Luke Parker
3512b3832d Don't actually define a pallet for pallet-session
We need its `Config`, not the pallet itself. As it has no storage, no calls,
and no events, this is fine.
2025-09-26 23:08:38 -04:00
Luke Parker
1164f92ea1 Update usage of now-associated const in processor/key-gen 2025-09-26 22:48:52 -04:00
Luke Parker
0a3ead0e19 Add patches to remove the unused optional dependencies tracked in tree
Also performs the usual `cargo update`.
2025-09-26 22:47:47 -04:00
Luke Parker
ea66cd0d1a Update build-dependencies CI action 2025-09-21 15:40:15 -04:00
Luke Parker
8b32fba458 Minor cargo update 2025-09-21 15:40:05 -04:00
Luke Parker
e63acf3f67 Restore a runtime which compiles
Adds BABE and GRANDPA to the runtime definition, plus a few stubs for
not-yet-implemented interfaces.
2025-09-21 13:16:43 -04:00
Luke Parker
d373d2a4c9 Restore AllowMint to serai-validator-sets-pallet and reorganize TODOs 2025-09-20 04:45:28 -04:00
Luke Parker
cbf998ff30 Restore report_slashes
This does not yet handle the `SlashReport`. It solely handles the routing for
it.
2025-09-20 04:16:01 -04:00
Luke Parker
ef07253a27 Restore the event to serai-validator-sets-pallet 2025-09-20 03:42:24 -04:00
Luke Parker
ffae6753ec Restore the set_keys call 2025-09-20 03:04:26 -04:00
Luke Parker
a04215bc13 Remove commented-out slashing code from serai-validator-sets-pallet
Deferred to https://github.com/serai-dex/serai/issues/657.
2025-09-20 03:04:19 -04:00
Luke Parker
28aea8a442 Incorporate a check that a validator can't force a permanent single point of failure 2025-09-20 01:58:39 -04:00
Luke Parker
7b46477ca0 Add explicit hook for deciding whether to include the genesis validators 2025-09-20 01:57:55 -04:00
Luke Parker
e62b62ddfb Restore usage of pallet-grandpa to serai-validator-sets-pallet 2025-09-20 01:36:11 -04:00
Luke Parker
a2d8d0fd13 Restore integration with pallet-babe to serai-validator-sets-pallet 2025-09-20 01:23:02 -04:00
Luke Parker
b2b36b17c4 Restore GenesisConfig to the validator sets pallet 2025-09-20 00:06:19 -04:00
Luke Parker
9de8394efa Emit events within the signals pallet 2025-09-19 22:44:29 -04:00
Luke Parker
3cb9432daa Have the coins pallet emit events via serai_core_pallet
`serai_core_pallet` solely defines an accumulator for the events. We use the
traditional `frame_system::Events` to store them for now and enable retrieval.
2025-09-19 22:18:55 -04:00
Luke Parker
3f5150b3fa Properly define the core pallet instead of placing it within the runtime 2025-09-19 19:05:47 -04:00
Luke Parker
d74b00b9e4 Update monero-oxide to the branch with the new RPC
See https://github.com/monero-oxide/monero-oxide/pull/66.

Allows us to remove the shim `simple-request 0.1` we had to define as we now
have `simple-request 0.2` in tree.
2025-09-18 19:09:22 -04:00
Luke Parker
3955f92cc2 Merge branch 'next' into next-polkadot-sdk 2025-09-18 18:19:14 -04:00
Luke Parker
a1ef18a039 Have simple-request return an error upon failing to find the system's root certificates 2025-09-18 17:03:16 -04:00
Luke Parker
bec806230a Misc updates 2025-09-18 16:25:33 -04:00
Luke Parker
8bafeab5b3 Tidy serai-signals-pallet
Adds `serai-validator-sets-pallet` and `serai-signals-pallet` to the runtime.
2025-09-16 08:45:02 -04:00
Luke Parker
3722df7326 Introduce KeyShares struct to represent the number of key shares
Associated improvements and bug fixes.
2025-09-16 08:45:02 -04:00
Luke Parker
ddb8e1398e Finally make modular-frost work with alloc alone
Carries the update to `frost-schnorrkel` and `bitcoin-serai`.
2025-09-16 08:45:02 -04:00
Luke Parker
2be69b23b1 Tweak multiexp to compile on core
On `core`, it'll use a serial implementation, of no benefit in itself other
than the fact that when `alloc` _is_ enabled, it'll use the multi-scalar
multiplication algorithms.

`schnorr-signatures` was previously tweaked to include a shim for
`SchnorrSignature::verify` which didn't use `multiexp_vartime`, per this same
premise. Now, instead of callers writing these shims, the shim lives within
`multiexp`.
2025-09-16 08:45:02 -04:00
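
As a sketch of that premise (not the crate's actual code), the serial fallback
is simply the naive sum of each scalar-point product, letting callers write
`multiexp`-shaped code unconditionally and transparently gain the real
multi-scalar multiplication algorithms once `alloc` is present:

  use group::Group;

  // Naive, serial multi-scalar multiplication: the sum of `scalar * point`
  // for each pair. Sketch only; the `alloc` implementations replace this
  // with proper MSM algorithms without callers changing shape.
  fn multiexp_vartime_serial<G: Group>(pairs: &[(G::Scalar, G)]) -> G {
    pairs.iter().fold(G::identity(), |sum, (scalar, point)| sum + (*point * *scalar))
  }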
Luke Parker
a82ccadbb0 Correct std-shims feature flagging 2025-09-16 08:45:02 -04:00
Luke Parker
1ff2934927 cargo update 2025-09-16 08:44:54 -04:00
Luke Parker
cd4ffa862f Remove coins, validator-sets use of Substrate's event system
We've defined our own.
2025-09-15 21:32:20 -04:00
Luke Parker
c0a4d85ae6 Restore claim_deallocation call to validator-sets pallet 2025-09-15 21:32:01 -04:00
Luke Parker
55e845fe12 Expose std_shims::io on core
The `io::Write` trait is somewhat worthless, being implemented for nothing, but
`Read` remains fully functional. This also allows using its polyfills _without_
requiring `alloc`.

Opportunity taken to make `schnorr-signatures` not require `alloc`.

This will require a version bump before being published due to newly requiring
the `alloc` feature be specified to maintain pre-existing behavior.

Enables resolving https://github.com/monero-oxide/monero-oxide/issues/48.
2025-09-15 21:24:10 -04:00
Luke Parker
5ea087d177 Add missing alloc feature to multiexp's use of zeroize
Fixes building `multiexp` without default features, without separately
specifying `zeroize` and adding the `alloc` feature.
2025-09-14 08:55:40 -04:00
Luke Parker
dd7dc0c1dc Add impl<R: Read> Read for &mut R to std_shims
Increases parity with `std::io`.
2025-09-12 18:26:27 -04:00
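
The impl is presumably the standard delegation `std` itself provides; a sketch
against a simplified stand-in trait (the real trait lives in `std_shims::io`
and differs in its error type):

  // A stand-in `Read` trait, solely to illustrate the delegating impl.
  pub trait Read {
    fn read(&mut self, buf: &mut [u8]) -> Result<usize, ()>;
  }

  // The blanket impl described: a `&mut R` reads by deferring to `R`,
  // mirroring `std::io`'s own `impl<R: Read + ?Sized> Read for &mut R`.
  impl<R: Read> Read for &mut R {
    fn read(&mut self, buf: &mut [u8]) -> Result<usize, ()> {
      (**self).read(buf)
    }
  }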
Luke Parker
c83fbb3e44 Expand std_shims::prelude to better match std::prelude 2025-09-12 18:24:56 -04:00
Luke Parker
befbbbfb84 Add the ability to bound the response's size limit to simple-request 2025-09-11 17:24:47 -04:00
Luke Parker
d0f497dc68 Latest patch-polkadot-sdk 2025-09-10 10:02:24 -04:00
Luke Parker
1b755a5d48 patch-polkadot-sdk enabling libp2p 0.56 2025-09-06 17:41:49 -04:00
Luke Parker
e5efcd56ba Make the MSRV lint more robust
The prior version would fail if the last entry in the final array was not
originally the last entry.
2025-09-06 14:43:21 -04:00
Luke Parker
5d60b3c2ae Update parity-db in serai-db
This synchronizes with an update to `patch-polkadot-sdk`.
2025-09-06 14:28:42 -04:00
Luke Parker
ae923b24ff Update `patch-polkadot-sdk`
Allows using `libp2p 0.55`.
2025-09-06 14:04:55 -04:00
Luke Parker
d304cd97e1 Merge branch 'next' into next-polkadot-sdk 2025-09-06 04:26:10 -04:00
Luke Parker
2b56dcdf3f Update patch-polkadot-sdk for bug fixes, removal of is-terminal
Adds a deny entry for `is-terminal` to stop it from secretly reappearing.

Restores the `is-terminal` patch for `is_terminal_polyfill` to have one less
external dependency.
2025-09-06 04:25:21 -04:00
Luke Parker
90804c4c30 Update deny.toml 2025-09-05 14:08:04 -04:00
Luke Parker
46caca2f51 Update patch-polkadot-sdk to remove scale_info 2025-09-05 14:07:52 -04:00
Luke Parker
2077e485bb Add borsh impls for SignedEmbeddedEllipticCurveKeys 2025-09-05 07:21:07 -04:00
Luke Parker
28dbef8a1c Update to the latest patch-polkadot-sdk
Removes several dependencies.
2025-09-05 06:57:30 -04:00
Luke Parker
3541197aa5 Merge branch 'next' into next-polkadot-sdk 2025-09-03 16:44:26 -04:00
Luke Parker
a2209dd6ff Misc clippy fixes 2025-09-03 06:10:54 -04:00
Luke Parker
2032cf355f Expose coins::Pallet::transfer_internal as transfer_fn
It is safe to call and assumes no preconditions.
2025-09-03 00:48:17 -04:00
Luke Parker
fe41b09fd4 Properly handle the error in validator-sets 2025-09-02 11:07:45 -04:00
Luke Parker
74bad049a7 Add abstraction for the embedded elliptic curve keys
It's minimal but still pleasant.
2025-09-02 10:42:06 -04:00
Luke Parker
72fefb3d85 Strongly type EmbeddedEllipticCurveKeys
Adds a signed variant to validate knowledge and ownership.

Add SCALE derivations for `EmbeddedEllipticCurveKeys`
2025-09-02 10:42:02 -04:00
Luke Parker
200c1530a4 WIP changes to validator-sets
Actually use the added `Allocations` abstraction

Start using the sessions API in the validator-sets pallet

Get `substrate/validator-sets` approximately compiling
2025-09-02 10:41:58 -04:00
Luke Parker
5736b87b57 Remove final references to scale in coordinator/processor
Slight tweaks to processor
2025-09-02 10:41:55 -04:00
Luke Parker
ada94e8c5d Get all processors to compile again
Requires splitting `serai-cosign` into `serai-cosign` and `serai-cosign-types`
so the processors don't require `serai-client/serai` (not correct yet).
2025-09-02 02:17:10 -04:00
Luke Parker
75240ed327 Update serai-message-queue to the new serai-primitives 2025-09-02 02:17:10 -04:00
Luke Parker
6177cf5c07 Have serai-runtime compile again 2025-09-02 02:17:10 -04:00
Luke Parker
0d38dc96b6 Use serai-primitives, not serai-client, when possible in coordinator/*
Also updates `serai-coordinator-tributary` to prefer `borsh` to SCALE.
2025-09-02 02:17:10 -04:00
Luke Parker
e8094523ff Use borsh instead of SCALE within tendermint-machine, tributary-sdk
Not only does this follow our general practice, the latest SCALE has a
possibly-lossy truncation in its current implementation for `enum`s, which I'd
like to avoid without simply silencing it.
2025-09-02 02:17:09 -04:00
Luke Parker
53a64bc7e2 Update serai-abi, and dependencies, to patch-polkadot-sdk 2025-09-02 02:17:09 -04:00
Luke Parker
3c6e889732 Update Cargo.lock after rebase 2025-08-30 19:36:46 -04:00
Luke Parker
354efc0192 Add deallocate function to validator-sets session abstraction 2025-08-30 18:34:20 -04:00
Luke Parker
e20058feae Add a Sessions abstraction for validator-sets storage 2025-08-30 18:34:20 -04:00
Luke Parker
09f0714894 Add a dedicated Allocations struct for managing validator set allocations
Part of the DB abstraction necessary for this spaghetti.
2025-08-30 18:34:15 -04:00
Luke Parker
d3d539553c Restore the coins pallet to the runtime 2025-08-30 18:32:26 -04:00
Luke Parker
b08ae8e6a7 Add a non-canonical SCALE derivations feature
Enables representing IUMT within `StorageValues`. Applied to a variety of
values.

Fixes a bug where `Some([0; 32])` would be considered a valid block anchor.
2025-08-30 18:32:21 -04:00
Luke Parker
35db2924b4 Populate UnbalancedMerkleTrees in headers 2025-08-30 18:32:20 -04:00
Luke Parker
bfff823bf7 Add an UnbalancedMerkleTree primitive
The reasoning for it is documented with itself. The plan is to use it within
our header for committing to the DAG (allowing one header per epoch, yet
logarithmic proofs for any header within the epoch), the transactions
commitment (allowing logarithmic proofs of a transaction within a block,
without padding), and the events commitment (allowing logarithmic proofs of
unique events within a block, despite events not having a unique ID inherent).

This also defines transaction hashes and performs the necessary modifications
for transactions to be unique.
2025-08-30 18:32:16 -04:00
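
The exact construction is documented in-tree and isn't reproduced here. Purely
to illustrate the "no padding" property, one common way to take an unbalanced
Merkle root is to promote an odd trailing node up a level unhashed (stand-in
hash function, assumed shape):

  // Sketch only. `hash` stands in for whatever two-to-one compression the
  // actual primitive uses; XOR is used purely so this is self-contained.
  fn hash(left: &[u8; 32], right: &[u8; 32]) -> [u8; 32] {
    let mut res = [0; 32];
    for i in 0 .. 32 {
      res[i] = left[i] ^ right[i];
    }
    res
  }

  fn unbalanced_merkle_root(mut layer: Vec<[u8; 32]>) -> Option<[u8; 32]> {
    if layer.is_empty() {
      return None;
    }
    while layer.len() > 1 {
      let mut next = Vec::with_capacity(layer.len().div_ceil(2));
      let mut pairs = layer.chunks_exact(2);
      for pair in &mut pairs {
        next.push(hash(&pair[0], &pair[1]));
      }
      // An odd trailing node is carried up as-is, avoiding padding while
      // keeping proofs logarithmic.
      if let [odd] = pairs.remainder() {
        next.push(*odd);
      }
      layer = next;
    }
    Some(layer[0])
  }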
Luke Parker
352af85498 Use borsh entirely in create_db 2025-08-30 18:32:07 -04:00
Luke Parker
ecad89b269 Remove now-consolidated primitives crates 2025-08-30 18:32:06 -04:00
Luke Parker
48f5ed71d7 Skeleton runtime with new types 2025-08-30 18:30:38 -04:00
Luke Parker
ed9cbdd8e0 Have apply return Ok even if calls failed
This ensures fees are paid, and block building isn't interrupted, even for TXs
which error.
2025-08-30 18:27:23 -04:00
Luke Parker
0ac11defcc Serialize BoundedVec not with a u32 length, but the minimum-viable uN where N%8==0
This does break borsh's definition of a Vec EXCEPT if the BoundedVec is
considered an enum. For sufficiently low bounds, this is viable, though it
requires automated code generation to be sane.
2025-08-30 18:27:23 -04:00
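
A sketch of the described encoding for a concrete case (assumed function
shape, not the actual implementation): with a bound under 256, the length fits
in a single `u8` where borsh would otherwise write a `u32`:

  // Sketch: serialize a bounded list of 32-byte elements with a u8 length
  // prefix, the "minimum-viable uN" for any bound under 256.
  fn serialize_bounded(elements: &[[u8; 32]], bound: usize) -> Option<Vec<u8>> {
    assert!(bound < 256);
    if elements.len() > bound {
      return None;
    }
    let mut res = Vec::with_capacity(1 + (32 * elements.len()));
    // A single byte of length, where borsh's Vec definition would use four
    res.push(u8::try_from(elements.len()).expect("bound < 256 yet len > 255"));
    for element in elements {
      res.extend_from_slice(element);
    }
    Some(res)
  }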
Luke Parker
24e89316d5 Correct distinction/flow of check/validate/apply 2025-08-30 18:27:23 -04:00
Luke Parker
3f03dac050 Make transaction an enum of Unsigned, Signed 2025-08-30 18:27:23 -04:00
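
A sketch of the resulting shape (field and type names assumed; the actual
definitions live in `serai-abi`):

  struct Call;
  struct Signature;

  enum Transaction {
    Unsigned { calls: Vec<Call> },
    Signed { calls: Vec<Call>, signature: Signature },
  }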
Luke Parker
820b710928 Remove RuntimeCall from Transaction
I believe this was originally here as we needed to return a reference, not an
owned instance, so this caching enabled returning a reference? Regardless, it
isn't valuable now.
2025-08-30 18:27:23 -04:00
Luke Parker
88c7ae3e7d Add traits necessary for serai_abi::Transaction to be usable in-runtime 2025-08-30 18:27:22 -04:00
Luke Parker
dd5e43760d Add the UNIX timestamp (in milliseconds) to the block
This is read from the BABE pre-digest when converting from a SubstrateHeader.
This causes the genesis block to have time 0 and all blocks produced with BABE
to have a time of the slot time. While the slot time is in 6-second intervals
(due to our target block time), defining in milliseconds preserves the ABI for
long-term goals (sub-second blocks).

Usage of the slot time deduplicates this field with BABE, and leaves the only
possible manipulation to propose during a slot or to not propose during a slot.

The actual reason this was implemented this way is because the Header trait is
overly restrictive and doesn't allow definition with new fields. Even if we
wanted to express the timestamp within the SubstrateHeader, we can't without
replacing Header::new and making a variety of changes to the polkadot-sdk
accordingly. Those aren't worth it at this moment compared to the solution
implemented.
2025-08-30 18:27:09 -04:00
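
The implied arithmetic is small; a sketch with assumed names (the 6-second
target is stated above, and BABE slots index wall-clock slot intervals):

  // Assumed constant: the stated 6-second target block time, in milliseconds.
  const SLOT_DURATION_MS: u64 = 6_000;

  // Derive the block's millisecond timestamp from its BABE slot, with the
  // genesis block (which has no BABE pre-digest) given time 0. Sketch only.
  fn block_time_ms(babe_slot: Option<u64>) -> u64 {
    babe_slot.map_or(0, |slot| slot * SLOT_DURATION_MS)
  }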
Luke Parker
776e417fd2 Redo primitives, abi
Consolidates all primitives into a single crate. We didn't benefit from its
fragmentation. I'm hesitant to say the new internal organization is better (it
may be just as clunky), but it's at least in a single crate (not spread out
over micro-crates).

The ABI is the most distinct. We now entirely own it. Block header hashes don't
directly commit to any BABE data (avoiding potentially ~4 KB headers upon
session changes), and are hashed as borsh (a more widely used codec than
SCALE). There are still Substrate variants, using SCALE and with the BABE data,
but they're prunable from a protocol design perspective.

Defines a transaction as a Vec of Calls, allowing atomic operations.
2025-08-30 18:26:37 -04:00
Luke Parker
2f8ce15a92 Update deny, rust-src component 2025-08-30 18:25:02 -04:00
Luke Parker
af56304676 Update the git tags
Does no actual migration work. This allows establishing the difference in
dependencies between substrate and polkadot-sdk/substrate.
2025-08-30 18:23:49 -04:00
Luke Parker
62a2c4f20e Update nightly version 2025-08-30 18:22:48 -04:00
Luke Parker
c69841710a Remove unnecessary to_string for clone 2025-08-30 18:08:08 -04:00
Luke Parker
3158590675 Remove unused patch for parking_lot_core 2025-08-30 16:20:29 -04:00
418 changed files with 15112 additions and 18847 deletions

View File

@@ -12,7 +12,7 @@ runs:
   steps:
     - name: Bitcoin Daemon Cache
       id: cache-bitcoind
-      uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809
+      uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # 4.2.4
       with:
         path: bitcoin.tar.gz
         key: bitcoind-${{ runner.os }}-${{ runner.arch }}-${{ inputs.version }}

View File

@@ -52,7 +52,7 @@ runs:
     - name: Install solc
       shell: bash
       run: |
-        cargo +1.91 install svm-rs --version =0.5.19
+        cargo +1.91.1 install svm-rs --version =0.5.22
        svm install 0.8.29
        svm use 0.8.29
@@ -75,11 +75,8 @@ runs:
       if: runner.os == 'Linux'
     - name: Install rootless Docker
-      uses: docker/setup-docker-action@b60f85385d03ac8acfca6d9996982511d8620a19
+      uses: docker/setup-docker-action@e61617a16c407a86262fb923c35a616ddbe070b3 # 4.6.0
       with:
         rootless: true
         set-host: true
       if: runner.os == 'Linux'
-    # - name: Cache Rust
-    #   uses: Swatinem/rust-cache@a95ba195448af2da9b00fb742d14ffaaf3c21f43

View File

@@ -5,14 +5,14 @@ inputs:
   version:
     description: "Version to download and run"
     required: false
-    default: v0.18.4.3
+    default: v0.18.4.4
 runs:
   using: "composite"
   steps:
     - name: Monero Wallet RPC Cache
       id: cache-monero-wallet-rpc
-      uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809
+      uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # 4.2.4
       with:
         path: monero-wallet-rpc
         key: monero-wallet-rpc-${{ runner.os }}-${{ runner.arch }}-${{ inputs.version }}

View File

@@ -5,14 +5,14 @@ inputs:
   version:
     description: "Version to download and run"
     required: false
-    default: v0.18.4.3
+    default: v0.18.4.4
 runs:
   using: "composite"
   steps:
     - name: Monero Daemon Cache
       id: cache-monerod
-      uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809
+      uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # 4.2.4
       with:
         path: /usr/bin/monerod
         key: monerod-${{ runner.os }}-${{ runner.arch }}-${{ inputs.version }}

View File

@@ -5,7 +5,7 @@ inputs:
   monero-version:
     description: "Monero version to download and run as a regtest node"
     required: false
-    default: v0.18.4.3
+    default: v0.18.4.4
   bitcoin-version:
     description: "Bitcoin version to download and run as a regtest node"
@@ -19,9 +19,9 @@ runs:
       uses: ./.github/actions/build-dependencies
     - name: Install Foundry
-      uses: foundry-rs/foundry-toolchain@8f1998e9878d786675189ef566a2e4bf24869773
+      uses: foundry-rs/foundry-toolchain@50d5a8956f2e319df19e6b57539d7e2acb9f8c1e # 1.5.0
       with:
-        version: nightly-f625d0fa7c51e65b4bf1e8f7931cd1c6e2e285e9
+        version: v1.5.0
         cache: false
     - name: Run a Monero Regtest Node

View File

@@ -1 +1 @@
-nightly-2025-11-11
+nightly-2025-12-01

View File

@@ -17,7 +17,7 @@ jobs:
   test-common:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
+      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
       - name: Build Dependencies
         uses: ./.github/actions/build-dependencies

View File

@@ -31,7 +31,7 @@ jobs:
   build:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
+      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
       - name: Install Build Dependencies
         uses: ./.github/actions/build-dependencies

View File

@@ -19,7 +19,7 @@ jobs:
   test-crypto:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
+      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
       - name: Build Dependencies
         uses: ./.github/actions/build-dependencies

View File

@@ -9,16 +9,10 @@ jobs:
   name: Run cargo deny
   runs-on: ubuntu-latest
   steps:
-    - uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
+    - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
-    - name: Advisory Cache
-      uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809
-      with:
-        path: ~/.cargo/advisory-db
-        key: rust-advisory-db
     - name: Install cargo deny
-      run: cargo +1.91 install cargo-deny --version =0.18.5
+      run: cargo +1.91.1 install cargo-deny --version =0.18.8
     - name: Run cargo deny
       run: cargo deny -L error --all-features check --hide-inclusion-graph

View File

@@ -13,7 +13,7 @@ jobs:
   build:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
+      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
       - name: Install Build Dependencies
         uses: ./.github/actions/build-dependencies

View File

@@ -15,7 +15,7 @@ jobs:
     runs-on: ${{ matrix.os }}
     steps:
-      - uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
+      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
       - name: Get nightly version to use
         id: nightly
@@ -43,16 +43,10 @@
   deny:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
+      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
-      - name: Advisory Cache
-        uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809
-        with:
-          path: ~/.cargo/advisory-db
-          key: rust-advisory-db
       - name: Install cargo deny
-        run: cargo +1.91 install cargo-deny --version =0.18.5
+        run: cargo +1.91.1 install cargo-deny --version =0.18.8
       - name: Run cargo deny
         run: cargo deny -L error --all-features check --hide-inclusion-graph
@@ -60,7 +54,7 @@
   fmt:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
+      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
       - name: Get nightly version to use
         id: nightly
@@ -73,10 +67,10 @@
       - name: Run rustfmt
         run: cargo +${{ steps.nightly.outputs.version }} fmt -- --check
-      - name: Install foundry
-        uses: foundry-rs/foundry-toolchain@8f1998e9878d786675189ef566a2e4bf24869773
+      - name: Install Foundry
+        uses: foundry-rs/foundry-toolchain@50d5a8956f2e319df19e6b57539d7e2acb9f8c1e # 1.5.0
         with:
-          version: nightly-41d4e5437107f6f42c7711123890147bc736a609
+          version: v1.5.0
           cache: false
       - name: Run forge fmt
@@ -85,20 +79,20 @@
   machete:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
+      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
      - name: Verify all dependencies are in use
        run: |
-          cargo +1.91 install cargo-machete --version =0.9.1
-          cargo +1.91 machete
+          cargo +1.91.1 install cargo-machete --version =0.9.1
+          cargo +1.91.1 machete
   msrv:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
+      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
       - name: Verify claimed `rust-version`
         shell: bash
         run: |
-          cargo +1.91 install cargo-msrv --version =0.18.4
+          cargo +1.91.1 install cargo-msrv --version =0.18.4
           function check_msrv {
             # We `cd` into the directory passed as the first argument, but will return to the
@@ -189,16 +183,16 @@
   slither:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
+      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
       - name: Build Dependencies
         uses: ./.github/actions/build-dependencies
       - name: Slither
         run: |
-          python3 -m pip install slither-analyzer
+          python3 -m pip install slither-analyzer==0.11.3
-          slither --include-paths ./networks/ethereum/schnorr/contracts/Schnorr.sol
+          slither ./networks/ethereum/schnorr/contracts/Schnorr.sol
           slither --include-paths ./networks/ethereum/schnorr/contracts ./networks/ethereum/schnorr/contracts/tests/Schnorr.sol
           slither processor/ethereum/deployer/contracts/Deployer.sol
           slither processor/ethereum/erc20/contracts/IERC20.sol

View File

@@ -27,7 +27,7 @@ jobs:
   build:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
+      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
       - name: Install Build Dependencies
         uses: ./.github/actions/build-dependencies

View File

@@ -17,7 +17,7 @@ jobs:
   test-common:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
+      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
       - name: Build Dependencies
         uses: ./.github/actions/build-dependencies

View File

@@ -9,7 +9,7 @@ jobs:
   name: Update nightly
   runs-on: ubuntu-latest
   steps:
-    - uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
+    - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
      with:
        submodules: "recursive"

View File

@@ -21,7 +21,7 @@ jobs:
   test-networks:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
+      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
       - name: Test Dependencies
         uses: ./.github/actions/test-dependencies

View File

@@ -23,7 +23,7 @@ jobs:
   build:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
+      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
       - name: Install Build Dependencies
         uses: ./.github/actions/build-dependencies

View File

@@ -46,16 +46,16 @@
   runs-on: ubuntu-latest
   steps:
     - name: Checkout
-      uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
+      uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
     - name: Setup Ruby
-      uses: ruby/setup-ruby@44511735964dcb71245e7e55f72539531f7bc0eb
+      uses: ruby/setup-ruby@8aeb6ff8030dd539317f8e1769a044873b56ea71 # 1.268.0
       with:
         bundler-cache: true
         cache-version: 0
         working-directory: "${{ github.workspace }}/docs"
     - name: Setup Pages
       id: pages
-      uses: actions/configure-pages@983d7736d9b0ae728b81ab479565c72886d7745b
+      uses: actions/configure-pages@983d7736d9b0ae728b81ab479565c72886d7745b # 5.0.0
     - name: Build with Jekyll
       run: cd ${{ github.workspace }}/docs && bundle exec jekyll build --baseurl "${{ steps.pages.outputs.base_path }}"
       env:
@@ -74,7 +74,7 @@
         mv target/doc docs/_site/rust
     - name: Upload artifact
-      uses: actions/upload-pages-artifact@7b1f4a764d45c48632c6b24a0339c27f5614fb0b
+      uses: actions/upload-pages-artifact@7b1f4a764d45c48632c6b24a0339c27f5614fb0b # 4.0.0
       with:
         path: "docs/_site/"
@@ -88,4 +88,4 @@
   steps:
     - name: Deploy to GitHub Pages
       id: deployment
-      uses: actions/deploy-pages@d6db90164ac5ed86f2b6aed7e0febac5b3c0c03e
+      uses: actions/deploy-pages@d6db90164ac5ed86f2b6aed7e0febac5b3c0c03e # 4.0.5

View File

@@ -31,7 +31,7 @@ jobs:
   build:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
+      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
       - name: Install Build Dependencies
         uses: ./.github/actions/build-dependencies

View File

@@ -27,10 +27,10 @@ jobs:
   build:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
+      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
       - name: Install Build Dependencies
         uses: ./.github/actions/build-dependencies
       - name: Run Reproducible Runtime tests
-        run: GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features -p serai-reproducible-runtime-tests
+        run: GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features -p serai-reproducible-runtime-tests -- --nocapture

View File

@@ -29,7 +29,7 @@ jobs:
   test-infra:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
+      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
       - name: Build Dependencies
         uses: ./.github/actions/build-dependencies
@@ -61,6 +61,7 @@
             -p serai-monero-processor \
             -p tendermint-machine \
             -p tributary-sdk \
+            -p serai-cosign-types \
             -p serai-cosign \
             -p serai-coordinator-substrate \
             -p serai-coordinator-tributary \
@@ -73,7 +74,7 @@
   test-substrate:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
+      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
       - name: Build Dependencies
         uses: ./.github/actions/build-dependencies
@@ -82,31 +83,33 @@
         run: |
           GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features \
             -p serai-primitives \
-            -p serai-coins-primitives \
-            -p serai-coins-pallet \
-            -p serai-dex-pallet \
-            -p serai-validator-sets-primitives \
-            -p serai-validator-sets-pallet \
-            -p serai-genesis-liquidity-primitives \
-            -p serai-genesis-liquidity-pallet \
-            -p serai-emissions-primitives \
-            -p serai-emissions-pallet \
-            -p serai-economic-security-pallet \
-            -p serai-in-instructions-primitives \
-            -p serai-in-instructions-pallet \
-            -p serai-signals-primitives \
-            -p serai-signals-pallet \
             -p serai-abi \
+            -p substrate-median \
+            -p serai-core-pallet \
+            -p serai-coins-pallet \
+            -p serai-validator-sets-pallet \
+            -p serai-signals-pallet \
+            -p serai-dex-pallet \
+            -p serai-genesis-liquidity-pallet \
+            -p serai-economic-security-pallet \
+            -p serai-emissions-pallet \
+            -p serai-in-instructions-pallet \
             -p serai-runtime \
-            -p serai-node
+            -p serai-node \
+            -p serai-substrate-tests
   test-serai-client:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
+      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
       - name: Build Dependencies
         uses: ./.github/actions/build-dependencies
       - name: Run Tests
-        run: GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features -p serai-client
+        run: |
+          GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features -p serai-client-serai
+          GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features -p serai-client-bitcoin
+          GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features -p serai-client-ethereum
+          GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features -p serai-client-monero
+          GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features -p serai-client

.gitignore (vendored): 3 lines changed
View File

@@ -2,11 +2,10 @@ target
 # Don't commit any `Cargo.lock` which aren't the workspace's
 Cargo.lock
-!./Cargo.lock
+!/Cargo.lock
 # Don't commit any `Dockerfile`, as they're auto-generated, except the only one which isn't
 Dockerfile
-Dockerfile.fast-epoch
 !orchestration/runtime/Dockerfile
 .test-logs

Cargo.lock (generated): 2172 lines changed

File diff suppressed because it is too large.

View File

@@ -62,13 +62,14 @@ members = [
   "processor/ethereum/primitives",
   "processor/ethereum/test-primitives",
   "processor/ethereum/deployer",
-  "processor/ethereum/router",
   "processor/ethereum/erc20",
+  "processor/ethereum/router",
   "processor/ethereum",
   "processor/monero",
   "coordinator/tributary-sdk/tendermint",
   "coordinator/tributary-sdk",
+  "coordinator/cosign/types",
   "coordinator/cosign",
   "coordinator/substrate",
   "coordinator/tributary",
@@ -77,34 +78,27 @@
   "coordinator",
   "substrate/primitives",
-  "substrate/coins/primitives",
-  "substrate/coins/pallet",
-  "substrate/dex/pallet",
-  "substrate/validator-sets/primitives",
-  "substrate/validator-sets/pallet",
-  "substrate/genesis-liquidity/primitives",
-  "substrate/genesis-liquidity/pallet",
-  "substrate/emissions/primitives",
-  "substrate/emissions/pallet",
-  "substrate/economic-security/pallet",
-  "substrate/in-instructions/primitives",
-  "substrate/in-instructions/pallet",
-  "substrate/signals/primitives",
-  "substrate/signals/pallet",
   "substrate/abi",
+  "substrate/median",
+  "substrate/core",
+  "substrate/coins",
+  "substrate/validator-sets",
+  "substrate/signals",
+  "substrate/dex",
+  "substrate/genesis-liquidity",
+  "substrate/economic-security",
+  "substrate/emissions",
+  "substrate/in-instructions",
   "substrate/runtime",
   "substrate/node",
+  "substrate/client/serai",
+  "substrate/client/bitcoin",
+  "substrate/client/ethereum",
+  "substrate/client/monero",
   "substrate/client",
   "orchestration",
@@ -117,10 +111,24 @@
   "tests/message-queue",
   # TODO "tests/processor",
   # TODO "tests/coordinator",
+  "tests/substrate",
   # TODO "tests/full-stack",
   "tests/reproducible-runtime",
 ]
+[profile.dev]
+panic = "abort"
+overflow-checks = true
+
+[profile.release]
+panic = "abort"
+overflow-checks = true
+
+# These do not respect the `panic` configuration value, so we don't provide them
+[profile.test]
+# panic = "abort" # https://github.com/rust-lang/rust/issues/67650
+overflow-checks = true
+[profile.bench]
+overflow-checks = true
 [profile.dev.package]
 # Always compile Monero (and a variety of dependencies) with optimizations due
 # to the extensive operations required for Bulletproofs
@@ -138,11 +146,14 @@ dalek-ff-group = { opt-level = 3 }
 multiexp = { opt-level = 3 }
-monero-generators = { opt-level = 3 }
-monero-borromean = { opt-level = 3 }
-monero-bulletproofs = { opt-level = 3 }
+monero-io = { opt-level = 3 }
+monero-primitives = { opt-level = 3 }
+monero-ed25519 = { opt-level = 3 }
 monero-mlsag = { opt-level = 3 }
 monero-clsag = { opt-level = 3 }
+monero-borromean = { opt-level = 3 }
+monero-bulletproofs-generators = { opt-level = 3 }
+monero-bulletproofs = { opt-level = 3 }
 monero-oxide = { opt-level = 3 }
 # Always compile the eVRF DKG tree with optimizations as well
@@ -167,18 +178,19 @@ revm-precompile = { opt-level = 3 }
 revm-primitives = { opt-level = 3 }
 revm-state = { opt-level = 3 }
-[profile.release]
-panic = "unwind"
-overflow-checks = true
 [patch.crates-io]
-# Point to empty crates for unused crates in our tree
+# Point to empty crates for crates unused within our tree
+alloy-eip2124 = { path = "patches/ethereum/alloy-eip2124" }
 ark-ff-3 = { package = "ark-ff", path = "patches/ethereum/ark-ff-0.3" }
 ark-ff-4 = { package = "ark-ff", path = "patches/ethereum/ark-ff-0.4" }
 c-kzg = { path = "patches/ethereum/c-kzg" }
-secp256k1-30 = { package = "secp256k1", path = "patches/ethereum/secp256k1-30" }
+fastrlp-3 = { package = "fastrlp", path = "patches/ethereum/fastrlp-0.3" }
+fastrlp-4 = { package = "fastrlp", path = "patches/ethereum/fastrlp-0.4" }
+primitive-types-12 = { package = "primitive-types", path = "patches/ethereum/primitive-types-0.12" }
+rlp = { path = "patches/ethereum/rlp" }
+secp256k1-30 = { package = "secp256k1", path = "patches/ethereum/secp256k1-0.30" }
-# Dependencies from monero-oxide which originate from within our own tree
+# Dependencies from monero-oxide which originate from within our own tree, potentially shimmed to account for deviations since publishing
 std-shims = { path = "patches/std-shims" }
 simple-request = { path = "patches/simple-request" }
 multiexp = { path = "crypto/multiexp" }
@@ -188,6 +200,8 @@ dalek-ff-group = { path = "patches/dalek-ff-group" }
 minimal-ed448 = { path = "crypto/ed448" }
 modular-frost = { path = "crypto/frost" }
+# Patch due to `std` now including the required functionality
+is_terminal_polyfill = { path = "./patches/is_terminal_polyfill" }
+
 # This has a non-deprecated `std` alternative since Rust's 2024 edition
 home = { path = "patches/home" }
@@ -213,15 +227,12 @@ parity-bip39 = { path = "patches/parity-bip39" }
 k256 = { git = "https://github.com/kayabaNerve/elliptic-curves", rev = "4994c9ab163781a88cd4a49beae812a89a44e8c3" }
 p256 = { git = "https://github.com/kayabaNerve/elliptic-curves", rev = "4994c9ab163781a88cd4a49beae812a89a44e8c3" }
-# `jemalloc` conflicts with `mimalloc`, so patch to a `rocksdb` which never uses `jemalloc`
-librocksdb-sys = { path = "patches/librocksdb-sys" }
 [workspace.lints.clippy]
+incompatible_msrv = "allow" # Manually verified with a GitHub workflow
+manual_is_multiple_of = "allow"
 unwrap_or_default = "allow"
 map_unwrap_or = "allow"
 needless_continue = "allow"
-manual_is_multiple_of = "allow"
-incompatible_msrv = "allow" # Manually verified with a GitHub workflow
 borrow_as_ptr = "deny"
 cast_lossless = "deny"
 cast_possible_truncation = "deny"

View File

@@ -7,7 +7,7 @@ repository = "https://github.com/serai-dex/serai/tree/develop/common/db"
 authors = ["Luke Parker <lukeparker5132@gmail.com>"]
 keywords = []
 edition = "2021"
-rust-version = "1.65"
+rust-version = "1.77"
 [package.metadata.docs.rs]
 all-features = true
@@ -17,7 +17,7 @@ rustdoc-args = ["--cfg", "docsrs"]
 workspace = true
 [dependencies]
-parity-db = { version = "0.5", default-features = false, optional = true }
+parity-db = { version = "0.5", default-features = false, features = ["arc"], optional = true }
 rocksdb = { version = "0.24", default-features = false, features = ["zstd"], optional = true }
 [features]

View File

@@ -15,7 +15,7 @@ pub fn serai_db_key(
 ///
 /// Creates a unit struct and a default implementation for the `key`, `get`, and `set`. The macro
 /// uses a syntax similar to defining a function. Parameters are concatenated to produce a key,
-/// they must be `scale` encodable. The return type is used to auto encode and decode the database
+/// they must be `borsh` serializable. The return type is used to auto (de)serialize the database
 /// value bytes using `borsh`.
 ///
 /// # Arguments
@@ -54,11 +54,10 @@ macro_rules! create_db {
       )?;
       impl$(<$($generic_name: $generic_type),+>)? $field_name$(<$($generic_name),+>)? {
         pub(crate) fn key($($arg: $arg_type),*) -> Vec<u8> {
-          use scale::Encode;
           $crate::serai_db_key(
             stringify!($db_name).as_bytes(),
             stringify!($field_name).as_bytes(),
-            ($($arg),*).encode()
+            &borsh::to_vec(&($($arg),*)).unwrap(),
           )
         }
         pub(crate) fn set(
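A hypothetical usage sketch of the macro after this change (the names below are not from the repository): the parameters are borsh-serialized to form the key, and the return type is borsh-(de)serialized as the value.

// Sketch only: declares `ExampleDb::ValueByNumber`, keyed on the
// borsh-serialized `block_number`, storing a borsh-serialized `[u8; 32]`.
create_db!(
  ExampleDb {
    ValueByNumber: (block_number: u64) -> [u8; 32],
  }
);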


@@ -6,12 +6,63 @@ pub use std::sync::{Arc, Weak};
 mod mutex_shim {
   #[cfg(not(feature = "std"))]
-  pub use spin::{Mutex, MutexGuard};
+  mod spin_mutex {
+    use core::ops::{Deref, DerefMut};
+    // We wrap this in an `Option` so we can consider `None` as poisoned
+    pub(super) struct Mutex<T>(spin::Mutex<Option<T>>);
+    /// An acquired view of a `Mutex`.
+    pub struct MutexGuard<'mutex, T> {
+      mutex: spin::MutexGuard<'mutex, Option<T>>,
+      // This is `Some` for the lifetime of this guard, and is only represented as an `Option` due
+      // to needing to move it on `Drop` (which solely gives us a mutable reference to `self`)
+      value: Option<T>,
+    }
+    impl<T> Mutex<T> {
+      pub(super) const fn new(value: T) -> Self {
+        Self(spin::Mutex::new(Some(value)))
+      }
+      pub(super) fn lock(&self) -> MutexGuard<'_, T> {
+        let mut mutex = self.0.lock();
+        // Take from the `Mutex` so future acquisitions will see `None` unless this is restored
+        let value = mutex.take();
+        // Check the prior acquisition did in fact restore the value
+        if value.is_none() {
+          panic!("locking a `spin::Mutex` held by a thread which panicked");
+        }
+        MutexGuard { mutex, value }
+      }
+    }
+    impl<T> Deref for MutexGuard<'_, T> {
+      type Target = T;
+      fn deref(&self) -> &T {
+        self.value.as_ref().expect("no value yet checked upon lock acquisition")
+      }
+    }
+    impl<T> DerefMut for MutexGuard<'_, T> {
+      fn deref_mut(&mut self) -> &mut T {
+        self.value.as_mut().expect("no value yet checked upon lock acquisition")
+      }
+    }
+    impl<'mutex, T> Drop for MutexGuard<'mutex, T> {
+      fn drop(&mut self) {
+        // Restore the value
+        *self.mutex = self.value.take();
+      }
+    }
+  }
+  #[cfg(not(feature = "std"))]
+  pub use spin_mutex::*;
   #[cfg(feature = "std")]
   pub use std::sync::{Mutex, MutexGuard};
   /// A shimmed `Mutex` with an API mutual to `spin` and `std`.
-  #[derive(Default, Debug)]
   pub struct ShimMutex<T>(Mutex<T>);
   impl<T> ShimMutex<T> {
     /// Construct a new `Mutex`.
@@ -21,8 +72,9 @@ mod mutex_shim {
     /// Acquire a lock on the contents of the `Mutex`.
     ///
-    /// On no-`std` environments, this may spin until the lock is acquired. On `std` environments,
-    /// this may panic if the `Mutex` was poisoned.
+    /// This will panic if the `Mutex` was poisoned.
+    ///
+    /// On no-`std` environments, the implementation presumably defers to that of a spin lock.
     pub fn lock(&self) -> MutexGuard<'_, T> {
       #[cfg(feature = "std")]
       let res = self.0.lock().unwrap();
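A minimal usage sketch of the shim (assuming the `ShimMutex` API shown above; with this change, the no-`std` path panics on re-acquisition after a poisoning failure, mirroring `std`):

// Sketch only, against the API above.
let counter = ShimMutex::new(0u64);
{
  let mut guard = counter.lock();
  *guard += 1;
} // dropping the guard restores the value into the `spin::Mutex` on no-`std`
// If the value is never restored (the holder panicked without the guard
// running its `Drop`), the slot remains `None` and the next `lock()` panics.
assert_eq!(*counter.lock(), 1);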


@@ -31,7 +31,6 @@ frost = { package = "modular-frost", path = "../crypto/frost" }
 frost-schnorrkel = { path = "../crypto/schnorrkel" }
 hex = { version = "0.4", default-features = false, features = ["std"] }
-scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["std", "derive", "bit-vec"] }
 borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] }
 zalloc = { path = "../common/zalloc" }
@@ -43,7 +42,7 @@ messages = { package = "serai-processor-messages", path = "../processor/messages
 message-queue = { package = "serai-message-queue", path = "../message-queue" }
 tributary-sdk = { path = "./tributary-sdk" }
-serai-client = { path = "../substrate/client", default-features = false, features = ["serai", "borsh"] }
+serai-client-serai = { path = "../substrate/client/serai", default-features = false }
 log = { version = "0.4", default-features = false, features = ["std"] }
 env_logger = { version = "0.10", default-features = false, features = ["humantime"] }


@@ -19,11 +19,9 @@ workspace = true
 [dependencies]
 blake2 = { version = "0.11.0-rc.0", default-features = false, features = ["alloc"] }
-schnorrkel = { version = "0.11", default-features = false, features = ["std"] }
-scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["std", "derive"] }
 borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] }
-serai-client = { path = "../../substrate/client", default-features = false, features = ["serai", "borsh"] }
+serai-client-serai = { path = "../../substrate/client/serai", default-features = false }
 log = { version = "0.4", default-features = false, features = ["std"] }
@@ -31,3 +29,5 @@ tokio = { version = "1", default-features = false }
 serai-db = { path = "../../common/db", version = "0.1.1" }
 serai-task = { path = "../../common/task", version = "0.1" }
+serai-cosign-types = { path = "./types" }


@@ -1,10 +1,21 @@
 use core::future::Future;
 use std::{sync::Arc, collections::HashMap};
-use serai_client::{
-  primitives::{SeraiAddress, Amount},
-  validator_sets::primitives::ExternalValidatorSet,
-  Serai,
+use blake2::{Digest, Blake2b256};
+use serai_client_serai::{
+  abi::{
+    primitives::{
+      network_id::{ExternalNetworkId, NetworkId},
+      balance::Amount,
+      crypto::Public,
+      validator_sets::{Session, ExternalValidatorSet},
+      address::SeraiAddress,
+      merkle::IncrementalUnbalancedMerkleTree,
+    },
+    validator_sets::Event,
+  },
+  Serai, Events,
 };
 use serai_db::*;
@@ -12,9 +23,20 @@ use serai_task::ContinuallyRan;
 use crate::*;
+#[derive(BorshSerialize, BorshDeserialize)]
+struct Set {
+  session: Session,
+  key: Public,
+  stake: Amount,
+}
 create_db!(
   CosignIntend {
     ScanCosignFrom: () -> u64,
+    BuildsUpon: () -> IncrementalUnbalancedMerkleTree,
+    Stakes: (network: ExternalNetworkId, validator: SeraiAddress) -> Amount,
+    Validators: (set: ExternalValidatorSet) -> Vec<SeraiAddress>,
+    LatestSet: (network: ExternalNetworkId) -> Set,
   }
 );
@@ -35,23 +57,38 @@ db_channel! {
 async fn block_has_events_justifying_a_cosign(
   serai: &Serai,
   block_number: u64,
-) -> Result<(Block, HasEvents), String> {
+) -> Result<(Block, Events, HasEvents), String> {
   let block = serai
-    .finalized_block_by_number(block_number)
+    .block_by_number(block_number)
     .await
     .map_err(|e| format!("{e:?}"))?
     .ok_or_else(|| "couldn't get block which should've been finalized".to_string())?;
-  let serai = serai.as_of(block.hash());
-  if !serai.validator_sets().key_gen_events().await.map_err(|e| format!("{e:?}"))?.is_empty() {
-    return Ok((block, HasEvents::Notable));
+  let events = serai.events(block.header.hash()).await.map_err(|e| format!("{e:?}"))?;
+  if events.validator_sets().set_keys_events().next().is_some() {
+    return Ok((block, events, HasEvents::Notable));
   }
-  if !serai.coins().burn_with_instruction_events().await.map_err(|e| format!("{e:?}"))?.is_empty() {
-    return Ok((block, HasEvents::NonNotable));
+  if events.coins().burn_with_instruction_events().next().is_some() {
+    return Ok((block, events, HasEvents::NonNotable));
   }
-  Ok((block, HasEvents::No))
+  Ok((block, events, HasEvents::No))
+}
+// Fetch the `ExternalValidatorSet`s, and their associated keys, used for cosigning as of this
+// block.
+fn cosigning_sets(getter: &impl Get) -> Vec<(ExternalValidatorSet, Public, Amount)> {
+  let mut sets = vec![];
+  for network in ExternalNetworkId::all() {
+    let Some(Set { session, key, stake }) = LatestSet::get(getter, network) else {
+      // If this network doesn't have usable keys, move on
+      continue;
+    };
+    sets.push((ExternalValidatorSet { network, session }, key, stake));
+  }
+  sets
 }
 /// A task to determine which blocks we should intend to cosign.
@@ -67,56 +104,108 @@ impl<D: Db> ContinuallyRan for CosignIntendTask<D> {
     async move {
       let start_block_number = ScanCosignFrom::get(&self.db).unwrap_or(1);
       let latest_block_number =
-        self.serai.latest_finalized_block().await.map_err(|e| format!("{e:?}"))?.number();
+        self.serai.latest_finalized_block_number().await.map_err(|e| format!("{e:?}"))?;
       for block_number in start_block_number ..= latest_block_number {
         let mut txn = self.db.txn();
-        let (block, mut has_events) =
+        let (block, events, mut has_events) =
           block_has_events_justifying_a_cosign(&self.serai, block_number)
             .await
            .map_err(|e| format!("{e:?}"))?;
+        let mut builds_upon =
+          BuildsUpon::get(&txn).unwrap_or(IncrementalUnbalancedMerkleTree::new());
         // Check we are indexing a linear chain
-        if (block_number > 1) &&
-          (<[u8; 32]>::from(block.header.parent_hash) !=
-            SubstrateBlockHash::get(&txn, block_number - 1)
-              .expect("indexing a block but haven't indexed its parent"))
+        if block.header.builds_upon() !=
+          builds_upon.clone().calculate(serai_client_serai::abi::BLOCK_HEADER_BRANCH_TAG)
         {
           Err(format!(
             "node's block #{block_number} doesn't build upon the block #{} prior indexed",
             block_number - 1
           ))?;
         }
-        let block_hash = block.hash();
+        let block_hash = block.header.hash();
         SubstrateBlockHash::set(&mut txn, block_number, &block_hash);
+        builds_upon.append(
+          serai_client_serai::abi::BLOCK_HEADER_BRANCH_TAG,
+          Blake2b256::new_with_prefix([serai_client_serai::abi::BLOCK_HEADER_LEAF_TAG])
+            .chain_update(block_hash.0)
+            .finalize()
+            .into(),
+        );
+        BuildsUpon::set(&mut txn, &builds_upon);
+        // Update the stakes
+        for event in events.validator_sets().allocation_events() {
+          let Event::Allocation { validator, network, amount } = event else {
+            panic!("event from `allocation_events` wasn't `Event::Allocation`")
+          };
+          let Ok(network) = ExternalNetworkId::try_from(*network) else { continue };
+          let existing = Stakes::get(&txn, network, *validator).unwrap_or(Amount(0));
+          Stakes::set(&mut txn, network, *validator, &Amount(existing.0 + amount.0));
+        }
+        for event in events.validator_sets().deallocation_events() {
+          let Event::Deallocation { validator, network, amount, timeline: _ } = event else {
+            panic!("event from `deallocation_events` wasn't `Event::Deallocation`")
+          };
+          let Ok(network) = ExternalNetworkId::try_from(*network) else { continue };
+          let existing = Stakes::get(&txn, network, *validator).unwrap_or(Amount(0));
+          Stakes::set(&mut txn, network, *validator, &Amount(existing.0 - amount.0));
+        }
+        // Handle decided sets
+        for event in events.validator_sets().set_decided_events() {
+          let Event::SetDecided { set, validators } = event else {
+            panic!("event from `set_decided_events` wasn't `Event::SetDecided`")
+          };
+          let Ok(set) = ExternalValidatorSet::try_from(*set) else { continue };
+          Validators::set(
+            &mut txn,
+            set,
+            &validators.iter().map(|(validator, _key_shares)| *validator).collect(),
+          );
+        }
+        // Handle declarations of the latest set
+        for event in events.validator_sets().set_keys_events() {
+          let Event::SetKeys { set, key_pair } = event else {
+            panic!("event from `set_keys_events` wasn't `Event::SetKeys`")
+          };
+          let mut stake = 0;
+          for validator in
+            Validators::take(&mut txn, *set).expect("set which wasn't decided set keys")
+          {
+            stake += Stakes::get(&txn, set.network, validator).unwrap_or(Amount(0)).0;
+          }
+          LatestSet::set(
+            &mut txn,
+            set.network,
+            &Set { session: set.session, key: key_pair.0, stake: Amount(stake) },
+          );
+        }
         let global_session_for_this_block = LatestGlobalSessionIntended::get(&txn);
         // If this is notable, it creates a new global session, which we index into the database
         // now
         if has_events == HasEvents::Notable {
-          let serai = self.serai.as_of(block_hash);
-          let sets_and_keys = cosigning_sets(&serai).await?;
-          let global_session =
-            GlobalSession::id(sets_and_keys.iter().map(|(set, _key)| *set).collect());
-          let mut sets = Vec::with_capacity(sets_and_keys.len());
-          let mut keys = HashMap::with_capacity(sets_and_keys.len());
-          let mut stakes = HashMap::with_capacity(sets_and_keys.len());
+          let sets_and_keys_and_stakes = cosigning_sets(&txn);
+          let global_session = GlobalSession::id(
+            sets_and_keys_and_stakes.iter().map(|(set, _key, _stake)| *set).collect(),
+          );
+          let mut sets = Vec::with_capacity(sets_and_keys_and_stakes.len());
+          let mut keys = HashMap::with_capacity(sets_and_keys_and_stakes.len());
+          let mut stakes = HashMap::with_capacity(sets_and_keys_and_stakes.len());
           let mut total_stake = 0;
-          for (set, key) in &sets_and_keys {
-            sets.push(*set);
-            keys.insert(set.network, SeraiAddress::from(*key));
-            let stake = serai
-              .validator_sets()
-              .total_allocated_stake(set.network.into())
-              .await
-              .map_err(|e| format!("{e:?}"))?
-              .unwrap_or(Amount(0))
-              .0;
-            stakes.insert(set.network, stake);
-            total_stake += stake;
+          for (set, key, stake) in sets_and_keys_and_stakes {
+            sets.push(set);
+            keys.insert(set.network, key);
+            stakes.insert(set.network, stake.0);
+            total_stake += stake.0;
           }
           if total_stake == 0 {
            Err(format!("cosigning sets for block #{block_number} had 0 stake in total"))?;


@@ -7,18 +7,27 @@ use std::{sync::Arc, collections::HashMap, time::Instant};
 use blake2::{Digest, Blake2s256};
-use scale::{Encode, Decode};
 use borsh::{BorshSerialize, BorshDeserialize};
-use serai_client::{
-  primitives::{ExternalNetworkId, SeraiAddress},
-  validator_sets::primitives::{Session, ExternalValidatorSet, KeyPair},
-  Public, Block, Serai, TemporalSerai,
+use serai_client_serai::{
+  abi::{
+    primitives::{
+      BlockHash,
+      crypto::{Public, KeyPair},
+      network_id::ExternalNetworkId,
+      validator_sets::{Session, ExternalValidatorSet},
+      address::SeraiAddress,
+    },
+    Block,
+  },
+  Serai, State,
 };
 use serai_db::*;
 use serai_task::*;
+pub use serai_cosign_types::*;
 /// The cosigns which are intended to be performed.
 mod intend;
 /// The evaluator of the cosigns.
@@ -28,9 +37,6 @@ mod delay;
 pub use delay::BROADCAST_FREQUENCY;
 use delay::LatestCosignedBlockNumber;
-/// The schnorrkel context to used when signing a cosign.
-pub const COSIGN_CONTEXT: &[u8] = b"/serai/coordinator/cosign";
 /// A 'global session', defined as all validator sets used for cosigning at a given moment.
 ///
 /// We evaluate cosign faults within a global session. This ensures even if cosigners cosign
@@ -53,7 +59,7 @@ pub const COSIGN_CONTEXT: &[u8] = b"/serai/coordinator/cosign";
 pub(crate) struct GlobalSession {
   pub(crate) start_block_number: u64,
   pub(crate) sets: Vec<ExternalValidatorSet>,
-  pub(crate) keys: HashMap<ExternalNetworkId, SeraiAddress>,
+  pub(crate) keys: HashMap<ExternalNetworkId, Public>,
   pub(crate) stakes: HashMap<ExternalNetworkId, u64>,
   pub(crate) total_stake: u64,
 }
@@ -78,74 +84,12 @@ enum HasEvents {
   No,
 }
-/// An intended cosign.
-#[derive(Clone, Copy, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)]
-pub struct CosignIntent {
-  /// The global session this cosign is being performed under.
-  pub global_session: [u8; 32],
-  /// The number of the block to cosign.
-  pub block_number: u64,
-  /// The hash of the block to cosign.
-  pub block_hash: [u8; 32],
-  /// If this cosign must be handled before further cosigns are.
-  pub notable: bool,
-}
-/// A cosign.
-#[derive(Clone, PartialEq, Eq, Debug, Encode, Decode, BorshSerialize, BorshDeserialize)]
-pub struct Cosign {
-  /// The global session this cosign is being performed under.
-  pub global_session: [u8; 32],
-  /// The number of the block to cosign.
-  pub block_number: u64,
-  /// The hash of the block to cosign.
-  pub block_hash: [u8; 32],
-  /// The actual cosigner.
-  pub cosigner: ExternalNetworkId,
-}
-impl CosignIntent {
-  /// Convert this into a `Cosign`.
-  pub fn into_cosign(self, cosigner: ExternalNetworkId) -> Cosign {
-    let CosignIntent { global_session, block_number, block_hash, notable: _ } = self;
-    Cosign { global_session, block_number, block_hash, cosigner }
-  }
-}
-impl Cosign {
-  /// The message to sign to sign this cosign.
-  ///
-  /// This must be signed with schnorrkel, the context set to `COSIGN_CONTEXT`.
-  pub fn signature_message(&self) -> Vec<u8> {
-    // We use a schnorrkel context to domain-separate this
-    self.encode()
-  }
-}
-/// A signed cosign.
-#[derive(Clone, Debug, BorshSerialize, BorshDeserialize)]
-pub struct SignedCosign {
-  /// The cosign.
-  pub cosign: Cosign,
-  /// The signature for the cosign.
-  pub signature: [u8; 64],
-}
-impl SignedCosign {
-  fn verify_signature(&self, signer: serai_client::Public) -> bool {
-    let Ok(signer) = schnorrkel::PublicKey::from_bytes(&signer.0) else { return false };
-    let Ok(signature) = schnorrkel::Signature::from_bytes(&self.signature) else { return false };
-    signer.verify_simple(COSIGN_CONTEXT, &self.cosign.signature_message(), &signature).is_ok()
-  }
-}
 create_db! {
   Cosign {
     // The following are populated by the intend task and used throughout the library
     // An index of Substrate blocks
-    SubstrateBlockHash: (block_number: u64) -> [u8; 32],
+    SubstrateBlockHash: (block_number: u64) -> BlockHash,
     // A mapping from a global session's ID to its relevant information.
     GlobalSessions: (global_session: [u8; 32]) -> GlobalSession,
     // The last block to be cosigned by a global session.
@@ -177,60 +121,6 @@ create_db! {
   }
 }
-/// Fetch the keys used for cosigning by a specific network.
-async fn keys_for_network(
-  serai: &TemporalSerai<'_>,
-  network: ExternalNetworkId,
-) -> Result<Option<(Session, KeyPair)>, String> {
-  let Some(latest_session) =
-    serai.validator_sets().session(network.into()).await.map_err(|e| format!("{e:?}"))?
-  else {
-    // If this network hasn't had a session declared, move on
-    return Ok(None);
-  };
-  // Get the keys for the latest session
-  if let Some(keys) = serai
-    .validator_sets()
-    .keys(ExternalValidatorSet { network, session: latest_session })
-    .await
-    .map_err(|e| format!("{e:?}"))?
-  {
-    return Ok(Some((latest_session, keys)));
-  }
-  // If the latest session has yet to set keys, use the prior session
-  if let Some(prior_session) = latest_session.0.checked_sub(1).map(Session) {
-    if let Some(keys) = serai
-      .validator_sets()
-      .keys(ExternalValidatorSet { network, session: prior_session })
-      .await
-      .map_err(|e| format!("{e:?}"))?
-    {
-      return Ok(Some((prior_session, keys)));
-    }
-  }
-  Ok(None)
-}
-/// Fetch the `ExternalValidatorSet`s, and their associated keys, used for cosigning as of this
-/// block.
-async fn cosigning_sets(
-  serai: &TemporalSerai<'_>,
-) -> Result<Vec<(ExternalValidatorSet, Public)>, String> {
-  let mut sets = Vec::with_capacity(serai_client::primitives::EXTERNAL_NETWORKS.len());
-  for network in serai_client::primitives::EXTERNAL_NETWORKS {
-    let Some((session, keys)) = keys_for_network(serai, network).await? else {
-      // If this network doesn't have usable keys, move on
-      continue;
-    };
-    sets.push((ExternalValidatorSet { network, session }, keys.0));
-  }
-  Ok(sets)
-}
 /// An object usable to request notable cosigns for a block.
 pub trait RequestNotableCosigns: 'static + Send {
   /// The error type which may be encountered when requesting notable cosigns.
@@ -331,7 +221,10 @@ impl<D: Db> Cosigning<D> {
   }
   /// Fetch a cosigned Substrate block's hash by its block number.
-  pub fn cosigned_block(getter: &impl Get, block_number: u64) -> Result<Option<[u8; 32]>, Faulted> {
+  pub fn cosigned_block(
+    getter: &impl Get,
+    block_number: u64,
+  ) -> Result<Option<BlockHash>, Faulted> {
     if block_number > Self::latest_cosigned_block_number(getter)? {
       return Ok(None);
     }
@@ -346,8 +239,8 @@ impl<D: Db> Cosigning<D> {
   /// If this global session hasn't produced any notable cosigns, this will return the latest
   /// cosigns for this session.
   pub fn notable_cosigns(getter: &impl Get, global_session: [u8; 32]) -> Vec<SignedCosign> {
-    let mut cosigns = Vec::with_capacity(serai_client::primitives::EXTERNAL_NETWORKS.len());
-    for network in serai_client::primitives::EXTERNAL_NETWORKS {
+    let mut cosigns = vec![];
+    for network in ExternalNetworkId::all() {
       if let Some(cosign) = NetworksLatestCosignedBlock::get(getter, global_session, network) {
         cosigns.push(cosign);
       }
@@ -364,7 +257,7 @@ impl<D: Db> Cosigning<D> {
     let mut cosigns = Faults::get(&self.db, faulted).expect("faulted with no faults");
     // Also include all of our recognized-as-honest cosigns in an attempt to induce fault
     // identification in those who see the faulty cosigns as honest
-    for network in serai_client::primitives::EXTERNAL_NETWORKS {
+    for network in ExternalNetworkId::all() {
       if let Some(cosign) = NetworksLatestCosignedBlock::get(&self.db, faulted, network) {
         if cosign.cosign.global_session == faulted {
           cosigns.push(cosign);
@@ -376,8 +269,8 @@ impl<D: Db> Cosigning<D> {
     let Some(global_session) = evaluator::currently_evaluated_global_session(&self.db) else {
       return vec![];
     };
-    let mut cosigns = Vec::with_capacity(serai_client::primitives::EXTERNAL_NETWORKS.len());
-    for network in serai_client::primitives::EXTERNAL_NETWORKS {
+    let mut cosigns = vec![];
+    for network in ExternalNetworkId::all() {
       if let Some(cosign) = NetworksLatestCosignedBlock::get(&self.db, global_session, network) {
         cosigns.push(cosign);
       }
@@ -432,13 +325,8 @@ impl<D: Db> Cosigning<D> {
     // Check the cosign's signature
     {
-      let key = Public::from({
-        let Some(key) = global_session.keys.get(&network) else {
-          Err(IntakeCosignError::NonParticipatingNetwork)?
-        };
-        *key
-      });
+      let key =
+        *global_session.keys.get(&network).ok_or(IntakeCosignError::NonParticipatingNetwork)?;
       if !signed_cosign.verify_signature(key) {
         Err(IntakeCosignError::InvalidSignature)?;
       }


@@ -0,0 +1,25 @@
+[package]
+name = "serai-cosign-types"
+version = "0.1.0"
+description = "Evaluator of cosigns for the Serai network"
+license = "AGPL-3.0-only"
+repository = "https://github.com/serai-dex/serai/tree/develop/coordinator/cosign"
+authors = ["Luke Parker <lukeparker5132@gmail.com>"]
+keywords = []
+edition = "2021"
+publish = false
+rust-version = "1.85"
+[package.metadata.docs.rs]
+all-features = true
+rustdoc-args = ["--cfg", "docsrs"]
+[lints]
+workspace = true
+[dependencies]
+schnorrkel = { version = "0.11", default-features = false, features = ["std"] }
+borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] }
+serai-primitives = { path = "../../../substrate/primitives", default-features = false, features = ["std"] }


@@ -0,0 +1,72 @@
+#![cfg_attr(docsrs, feature(doc_cfg))]
+#![deny(missing_docs)]
+//! Types used when cosigning Serai. For more info, please see `serai-cosign`.
+use borsh::{BorshSerialize, BorshDeserialize};
+use serai_primitives::{BlockHash, crypto::Public, network_id::ExternalNetworkId};
+/// The schnorrkel context to used when signing a cosign.
+pub const COSIGN_CONTEXT: &[u8] = b"/serai/coordinator/cosign";
+/// An intended cosign.
+#[derive(Clone, Copy, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)]
+pub struct CosignIntent {
+  /// The global session this cosign is being performed under.
+  pub global_session: [u8; 32],
+  /// The number of the block to cosign.
+  pub block_number: u64,
+  /// The hash of the block to cosign.
+  pub block_hash: BlockHash,
+  /// If this cosign must be handled before further cosigns are.
+  pub notable: bool,
+}
+/// A cosign.
+#[derive(Clone, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)]
+pub struct Cosign {
+  /// The global session this cosign is being performed under.
+  pub global_session: [u8; 32],
+  /// The number of the block to cosign.
+  pub block_number: u64,
+  /// The hash of the block to cosign.
+  pub block_hash: BlockHash,
+  /// The actual cosigner.
+  pub cosigner: ExternalNetworkId,
+}
+impl CosignIntent {
+  /// Convert this into a `Cosign`.
+  pub fn into_cosign(self, cosigner: ExternalNetworkId) -> Cosign {
+    let CosignIntent { global_session, block_number, block_hash, notable: _ } = self;
+    Cosign { global_session, block_number, block_hash, cosigner }
+  }
+}
+impl Cosign {
+  /// The message to sign to sign this cosign.
+  ///
+  /// This must be signed with schnorrkel, the context set to `COSIGN_CONTEXT`.
+  pub fn signature_message(&self) -> Vec<u8> {
+    // We use a schnorrkel context to domain-separate this
+    borsh::to_vec(self).unwrap()
+  }
+}
+/// A signed cosign.
+#[derive(Clone, Debug, BorshSerialize, BorshDeserialize)]
+pub struct SignedCosign {
+  /// The cosign.
+  pub cosign: Cosign,
+  /// The signature for the cosign.
+  pub signature: [u8; 64],
+}
+impl SignedCosign {
+  /// Verify a cosign's signature.
+  pub fn verify_signature(&self, signer: Public) -> bool {
+    let Ok(signer) = schnorrkel::PublicKey::from_bytes(&signer.0) else { return false };
+    let Ok(signature) = schnorrkel::Signature::from_bytes(&self.signature) else { return false };
+    signer.verify_simple(COSIGN_CONTEXT, &self.cosign.signature_message(), &signature).is_ok()
+  }
+}
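For reference, a hypothetical round-trip with these types. This is a sketch, not code from the repository: it assumes schnorrkel 0.11's `Keypair::generate_with` and `sign_simple` (the counterparts of the `verify_simple` call above) and an `ExternalNetworkId::Bitcoin` variant; the `BlockHash` and `Public` tuple constructors follow their usage elsewhere in this diff.

use rand_core::OsRng;
use schnorrkel::Keypair;

// Sign a cosign's message under `COSIGN_CONTEXT`, then verify it.
let keypair = Keypair::generate_with(&mut OsRng);
let cosign = Cosign {
  global_session: [0; 32],
  block_number: 1,
  block_hash: BlockHash([0; 32]),
  cosigner: ExternalNetworkId::Bitcoin,
};
let signature = keypair.sign_simple(COSIGN_CONTEXT, &cosign.signature_message());
let signed = SignedCosign { cosign, signature: signature.to_bytes() };
assert!(signed.verify_signature(Public(keypair.public.to_bytes())));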


@@ -22,7 +22,7 @@ borsh = { version = "1", default-features = false, features = ["std", "derive",
 serai-db = { path = "../../common/db", version = "0.1" }
-serai-client = { path = "../../substrate/client", default-features = false, features = ["serai", "borsh"] }
+serai-primitives = { path = "../../substrate/primitives", default-features = false, features = ["std"] }
 serai-cosign = { path = "../cosign" }
 tributary-sdk = { path = "../tributary-sdk" }


@@ -29,7 +29,7 @@ schnorrkel = { version = "0.11", default-features = false, features = ["std"] }
 hex = { version = "0.4", default-features = false, features = ["std"] }
 borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] }
-serai-client = { path = "../../../substrate/client", default-features = false, features = ["serai", "borsh"] }
+serai-client-serai = { path = "../../../substrate/client/serai", default-features = false }
 serai-cosign = { path = "../../cosign" }
 tributary-sdk = { path = "../../tributary-sdk" }


@@ -7,7 +7,7 @@ use rand_core::{RngCore, OsRng};
 use blake2::{Digest, Blake2s256};
 use schnorrkel::{Keypair, PublicKey, Signature};
-use serai_client::primitives::PublicKey as Public;
+use serai_client_serai::abi::primitives::crypto::Public;
 use futures_util::{AsyncRead, AsyncReadExt, AsyncWrite, AsyncWriteExt};
 use libp2p::{
@@ -104,7 +104,7 @@ impl OnlyValidators {
       .verify_simple(PROTOCOL.as_bytes(), &msg, &sig)
       .map_err(|_| io::Error::other("invalid signature"))?;
-    Ok(peer_id_from_public(Public::from_raw(public_key.to_bytes())))
+    Ok(peer_id_from_public(Public(public_key.to_bytes())))
   }
 }
} }

View File

@@ -1,11 +1,11 @@
-use core::future::Future;
+use core::{future::Future, str::FromStr};
 use std::{sync::Arc, collections::HashSet};
 use rand_core::{RngCore, OsRng};
 use tokio::sync::mpsc;
-use serai_client::{SeraiError, Serai};
+use serai_client_serai::{RpcError, Serai};
 use libp2p::{
   core::multiaddr::{Protocol, Multiaddr},
@@ -50,7 +50,7 @@ impl ContinuallyRan for DialTask {
   const DELAY_BETWEEN_ITERATIONS: u64 = 5 * 60;
   const MAX_DELAY_BETWEEN_ITERATIONS: u64 = 10 * 60;
-  type Error = SeraiError;
+  type Error = RpcError;
   fn run_iteration(&mut self) -> impl Send + Future<Output = Result<bool, Self::Error>> {
     async move {
@@ -94,6 +94,13 @@ impl ContinuallyRan for DialTask {
           usize::try_from(OsRng.next_u64() % u64::try_from(potential_peers.len()).unwrap())
             .unwrap();
         let randomly_selected_peer = potential_peers.swap_remove(index_to_dial);
+        let Ok(randomly_selected_peer) = libp2p::Multiaddr::from_str(&randomly_selected_peer)
+        else {
+          log::error!(
+            "peer from substrate wasn't a valid `Multiaddr`: {randomly_selected_peer}"
+          );
+          continue;
+        };
         log::info!("found peer from substrate: {randomly_selected_peer}");


@@ -13,9 +13,10 @@ use rand_core::{RngCore, OsRng};
 use zeroize::Zeroizing;
 use schnorrkel::Keypair;
-use serai_client::{
-  primitives::{ExternalNetworkId, PublicKey},
-  validator_sets::primitives::ExternalValidatorSet,
+use serai_client_serai::{
+  abi::primitives::{
+    crypto::Public, network_id::ExternalNetworkId, validator_sets::ExternalValidatorSet,
+  },
   Serai,
 };
@@ -66,7 +67,7 @@ use dial::DialTask;
 const PORT: u16 = 30563; // 5132 ^ (('c' << 8) | 'o')
-fn peer_id_from_public(public: PublicKey) -> PeerId {
+fn peer_id_from_public(public: Public) -> PeerId {
   // 0 represents the identity Multihash, that no hash was performed
   // It's an internal constant so we can't refer to the constant inside libp2p
   PeerId::from_multihash(Multihash::wrap(0, &public.0).unwrap()).unwrap()


@@ -6,7 +6,7 @@ use std::{
 use borsh::BorshDeserialize;
-use serai_client::validator_sets::primitives::ExternalValidatorSet;
+use serai_client_serai::abi::primitives::validator_sets::ExternalValidatorSet;
 use tokio::sync::{mpsc, oneshot, RwLock};


@@ -4,9 +4,8 @@ use std::{
   collections::{HashSet, HashMap},
 };
-use serai_client::{
-  primitives::ExternalNetworkId, validator_sets::primitives::Session, SeraiError, Serai,
-};
+use serai_client_serai::abi::primitives::{network_id::ExternalNetworkId, validator_sets::Session};
+use serai_client_serai::{RpcError, Serai};
 use serai_task::{Task, ContinuallyRan};
@@ -52,7 +51,7 @@ impl Validators {
   async fn session_changes(
     serai: impl Borrow<Serai>,
     sessions: impl Borrow<HashMap<ExternalNetworkId, Session>>,
-  ) -> Result<Vec<(ExternalNetworkId, Session, HashSet<PeerId>)>, SeraiError> {
+  ) -> Result<Vec<(ExternalNetworkId, Session, HashSet<PeerId>)>, RpcError> {
     /*
       This uses the latest finalized block, not the latest cosigned block, which should be fine as
      in the worst case, we'd connect to unexpected validators. They still shouldn't be able to
@@ -61,18 +60,18 @@
      Besides, we can't connect to historical validators, only the current validators.
    */
-    let temporal_serai = serai.borrow().as_of_latest_finalized_block().await?;
-    let temporal_serai = temporal_serai.validator_sets();
+    let serai = serai.borrow().state().await?;
     let mut session_changes = vec![];
     {
       // FuturesUnordered can be bad practice as it'll cause timeouts if infrequently polled, but
       // we poll it till it yields all futures with the most minimal processing possible
       let mut futures = FuturesUnordered::new();
-      for network in serai_client::primitives::EXTERNAL_NETWORKS {
+      for network in ExternalNetworkId::all() {
         let sessions = sessions.borrow();
+        let serai = serai.borrow();
         futures.push(async move {
-          let session = match temporal_serai.session(network.into()).await {
+          let session = match serai.current_session(network.into()).await {
             Ok(Some(session)) => session,
             Ok(None) => return Ok(None),
             Err(e) => return Err(e),
@@ -81,12 +80,16 @@
           if sessions.get(&network) == Some(&session) {
             Ok(None)
           } else {
-            match temporal_serai.active_network_validators(network.into()).await {
-              Ok(validators) => Ok(Some((
+            match serai.current_validators(network.into()).await {
+              Ok(Some(validators)) => Ok(Some((
                 network,
                 session,
-                validators.into_iter().map(peer_id_from_public).collect(),
+                validators
+                  .into_iter()
+                  .map(|validator| peer_id_from_public(validator.into()))
+                  .collect(),
               ))),
+              Ok(None) => panic!("network has session yet no validators"),
               Err(e) => Err(e),
             }
           }
@@ -153,7 +156,7 @@ impl Validators {
   }
   /// Update the view of the validators.
-  pub(crate) async fn update(&mut self) -> Result<(), SeraiError> {
+  pub(crate) async fn update(&mut self) -> Result<(), RpcError> {
     let session_changes = Self::session_changes(&*self.serai, &self.sessions).await?;
     self.incorporate_session_changes(session_changes);
     Ok(())
@@ -206,7 +209,7 @@ impl ContinuallyRan for UpdateValidatorsTask {
   const DELAY_BETWEEN_ITERATIONS: u64 = 60;
   const MAX_DELAY_BETWEEN_ITERATIONS: u64 = 5 * 60;
-  type Error = SeraiError;
+  type Error = RpcError;
   fn run_iteration(&mut self) -> impl Send + Future<Output = Result<bool, Self::Error>> {
     async move {


@@ -1,7 +1,7 @@
 use core::future::Future;
 use std::time::{Duration, SystemTime};
-use serai_client::validator_sets::primitives::{MAX_KEY_SHARES_PER_SET, ExternalValidatorSet};
+use serai_primitives::validator_sets::{ExternalValidatorSet, KeyShares};
 use futures_lite::FutureExt;
@@ -30,7 +30,7 @@ pub const MIN_BLOCKS_PER_BATCH: usize = BLOCKS_PER_MINUTE + 1;
 /// commit is `8 + (validators * 32) + (32 + (validators * 32))` (for the time, list of validators,
 /// and aggregate signature). Accordingly, this should be a safe over-estimate.
 pub const BATCH_SIZE_LIMIT: usize = MIN_BLOCKS_PER_BATCH *
-  (tributary_sdk::BLOCK_SIZE_LIMIT + 32 + ((MAX_KEY_SHARES_PER_SET as usize) * 128));
+  (tributary_sdk::BLOCK_SIZE_LIMIT + 32 + ((KeyShares::MAX_PER_SET as usize) * 128));
 /// Sends a heartbeat to other validators on regular intervals informing them of our Tributary's
 /// tip.
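A worked instance of the over-estimate described in the comment above (the share count here is hypothetical; the real bound uses `KeyShares::MAX_PER_SET`):

// With v = 100 key shares:
//   commit = 8 + (100 * 32) + (32 + (100 * 32)) = 6_440 bytes
//   budget = 32 + (100 * 128)                   = 12_832 bytes (plus the block itself)
// so reserving 128 bytes per key share comfortably over-estimates the commit.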


@@ -7,7 +7,7 @@ use std::collections::HashMap;
 use borsh::{BorshSerialize, BorshDeserialize};
-use serai_client::{primitives::ExternalNetworkId, validator_sets::primitives::ExternalValidatorSet};
+use serai_primitives::{network_id::ExternalNetworkId, validator_sets::ExternalValidatorSet};
 use serai_db::Db;
 use tributary_sdk::{ReadWrite, TransactionTrait, Tributary, TributaryReader};


@@ -5,9 +5,10 @@ use serai_db::{create_db, db_channel};
 use dkg::Participant;
-use serai_client::{
-  primitives::ExternalNetworkId,
-  validator_sets::primitives::{Session, ExternalValidatorSet, KeyPair},
+use serai_client_serai::abi::primitives::{
+  crypto::KeyPair,
+  network_id::ExternalNetworkId,
+  validator_sets::{Session, ExternalValidatorSet},
 };
 use serai_cosign::SignedCosign;


@@ -12,10 +12,8 @@ use frost_schnorrkel::{
 use serai_db::{DbTxn, Db as DbTrait};
-use serai_client::{
-  primitives::SeraiAddress,
-  validator_sets::primitives::{ExternalValidatorSet, musig_context, set_keys_message},
-};
+#[rustfmt::skip]
+use serai_client_serai::abi::primitives::{validator_sets::ExternalValidatorSet, address::SeraiAddress};
 use serai_task::{DoesNotError, ContinuallyRan};
@@ -160,7 +158,7 @@ impl<CD: DbTrait, TD: DbTrait> ConfirmDkgTask<CD, TD> {
     let (machine, preprocess) = AlgorithmMachine::new(
       schnorrkel(),
       // We use a 1-of-1 Musig here as we don't know who will actually be in this Musig yet
-      musig(musig_context(set.into()), key, &[public_key]).unwrap(),
+      musig(ExternalValidatorSet::musig_context(&set), key, &[public_key]).unwrap(),
     )
     .preprocess(&mut OsRng);
     // We take the preprocess so we can use it in a distinct machine with the actual Musig
@@ -260,9 +258,12 @@ impl<CD: DbTrait, TD: DbTrait> ContinuallyRan for ConfirmDkgTask<CD, TD> {
         })
         .collect::<Vec<_>>();
-      let keys =
-        musig(musig_context(self.set.set.into()), self.key.clone(), &musig_public_keys)
-          .unwrap();
+      let keys = musig(
+        ExternalValidatorSet::musig_context(&self.set.set),
+        self.key.clone(),
+        &musig_public_keys,
+      )
+      .unwrap();
       // Rebuild the machine
       let (machine, preprocess_from_cache) =
@@ -296,9 +297,10 @@ impl<CD: DbTrait, TD: DbTrait> ContinuallyRan for ConfirmDkgTask<CD, TD> {
       };
       // Calculate our share
-      let (machine, share) = match handle_frost_error(
-        machine.sign(preprocesses, &set_keys_message(&self.set.set, &key_pair)),
-      ) {
+      let (machine, share) = match handle_frost_error(machine.sign(
+        preprocesses,
+        &ExternalValidatorSet::set_keys_message(&self.set.set, &key_pair),
+      )) {
         Ok((machine, share)) => (machine, share),
         // This yields the *musig participant index*
         Err(participant) => {


@@ -14,9 +14,14 @@ use borsh::BorshDeserialize;
 use tokio::sync::mpsc;
-use serai_client::{
-  primitives::{ExternalNetworkId, PublicKey, SeraiAddress, Signature},
-  validator_sets::primitives::{ExternalValidatorSet, KeyPair},
+use serai_client_serai::{
+  abi::primitives::{
+    BlockHash,
+    crypto::{Public, Signature, ExternalKey, KeyPair},
+    network_id::ExternalNetworkId,
+    validator_sets::ExternalValidatorSet,
+    address::SeraiAddress,
+  },
   Serai,
 };
 use message_queue::{Service, client::MessageQueue};
@@ -61,9 +66,7 @@ async fn serai() -> Arc<Serai> {
     let Ok(serai) = Serai::new(format!(
       "http://{}:9944",
       serai_env::var("SERAI_HOSTNAME").expect("Serai hostname wasn't provided")
-    ))
-    .await
-    else {
+    )) else {
       log::error!("couldn't connect to the Serai node");
       tokio::time::sleep(delay).await;
       delay = (delay + SERAI_CONNECTION_DELAY).min(MAX_SERAI_CONNECTION_DELAY);
@@ -213,10 +216,12 @@ async fn handle_network(
             &mut txn,
             ExternalValidatorSet { network, session },
             &KeyPair(
-              PublicKey::from_raw(substrate_key),
-              network_key
-                .try_into()
-                .expect("generated a network key which exceeds the maximum key length"),
+              Public(substrate_key),
+              ExternalKey(
+                network_key
+                  .try_into()
+                  .expect("generated a network key which exceeds the maximum key length"),
+              ),
             ),
           );
         }
@@ -284,12 +289,13 @@ async fn handle_network(
             &mut txn,
             ExternalValidatorSet { network, session },
             slash_report,
-            Signature::from(signature),
+            Signature(signature),
           );
         }
       },
       messages::ProcessorMessage::Substrate(msg) => match msg {
         messages::substrate::ProcessorMessage::SubstrateBlockAck { block, plans } => {
+          let block = BlockHash(block);
           let mut by_session = HashMap::new();
           for plan in plans {
             by_session
@@ -481,7 +487,7 @@ async fn main() {
   );
   // Handle each of the networks
-  for network in serai_client::primitives::EXTERNAL_NETWORKS {
+  for network in ExternalNetworkId::all() {
     tokio::spawn(handle_network(db.clone(), message_queue.clone(), serai.clone(), network));
   }


@@ -10,7 +10,10 @@ use tokio::sync::mpsc;
 use serai_db::{DbTxn, Db as DbTrait};
-use serai_client::validator_sets::primitives::{Session, ExternalValidatorSet};
+use serai_client_serai::abi::primitives::{
+  network_id::ExternalNetworkId,
+  validator_sets::{Session, ExternalValidatorSet},
+};
 use message_queue::{Service, Metadata, client::MessageQueue};
 use tributary_sdk::Tributary;
@@ -39,7 +42,7 @@ impl<P: P2p> ContinuallyRan for SubstrateTask<P> {
     let mut made_progress = false;
     // Handle the Canonical events
-    for network in serai_client::primitives::EXTERNAL_NETWORKS {
+    for network in ExternalNetworkId::all() {
       loop {
         let mut txn = self.db.txn();
         let Some(msg) = serai_coordinator_substrate::Canonical::try_recv(&mut txn, network)


@@ -11,8 +11,7 @@ use tokio::sync::mpsc;
 use serai_db::{Get, DbTxn, Db as DbTrait, create_db, db_channel};
-use scale::Encode;
-use serai_client::validator_sets::primitives::ExternalValidatorSet;
+use serai_client_serai::abi::primitives::validator_sets::ExternalValidatorSet;
 use tributary_sdk::{TransactionKind, TransactionError, ProvidedError, TransactionTrait, Tributary};
@@ -479,7 +478,8 @@ pub(crate) async fn spawn_tributary<P: P2p>(
     return;
   }
-  let genesis = <[u8; 32]>::from(Blake2s::<U32>::digest((set.serai_block, set.set).encode()));
+  let genesis =
+    <[u8; 32]>::from(Blake2s::<U32>::digest(borsh::to_vec(&(set.serai_block, set.set)).unwrap()));
   // Since the Serai block will be finalized, then cosigned, before we handle this, this time will
   // be a couple of minutes stale. While the Tributary will still function with a start time in the


@@ -20,17 +20,15 @@ workspace = true
 [dependencies]
 bitvec = { version = "1", default-features = false, features = ["std"] }
-scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["std", "derive", "bit-vec"] }
 borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] }
 dkg = { path = "../../crypto/dkg", default-features = false, features = ["std"] }
-serai-client = { path = "../../substrate/client", version = "0.1", default-features = false, features = ["serai", "borsh"] }
+serai-client-serai = { path = "../../substrate/client/serai", default-features = false }
 log = { version = "0.4", default-features = false, features = ["std"] }
 futures = { version = "0.3", default-features = false, features = ["std"] }
-tokio = { version = "1", default-features = false }
 serai-db = { path = "../../common/db", version = "0.1.1" }
 serai-task = { path = "../../common/task", version = "0.1" }


@@ -3,7 +3,13 @@ use std::sync::Arc;
 use futures::stream::{StreamExt, FuturesOrdered};
-use serai_client::{validator_sets::primitives::ExternalValidatorSet, Serai};
+use serai_client_serai::{
+  abi::{
+    self,
+    primitives::{network_id::ExternalNetworkId, validator_sets::ExternalValidatorSet},
+  },
+  Serai,
+};
 use messages::substrate::{InInstructionResult, ExecutedBatch, CoordinatorMessage};
@@ -15,6 +21,7 @@ use serai_cosign::Cosigning;
 create_db!(
   CoordinatorSubstrateCanonical {
     NextBlock: () -> u64,
+    LastIndexedBatchId: (network: ExternalNetworkId) -> u32,
   }
 );
@@ -45,10 +52,10 @@ impl<D: Db> ContinuallyRan for CanonicalEventStream<D> {
       // These are all the events which generate canonical messages
       struct CanonicalEvents {
         time: u64,
-        key_gen_events: Vec<serai_client::validator_sets::ValidatorSetsEvent>,
-        set_retired_events: Vec<serai_client::validator_sets::ValidatorSetsEvent>,
-        batch_events: Vec<serai_client::in_instructions::InInstructionsEvent>,
-        burn_events: Vec<serai_client::coins::CoinsEvent>,
+        set_keys_events: Vec<abi::validator_sets::Event>,
+        slash_report_events: Vec<abi::validator_sets::Event>,
+        batch_events: Vec<abi::in_instructions::Event>,
+        burn_events: Vec<abi::coins::Event>,
       }
       // For a cosigned block, fetch all relevant events
@@ -66,40 +73,24 @@ impl<D: Db> ContinuallyRan for CanonicalEventStream<D> {
          }
          Err(serai_cosign::Faulted) => return Err("cosigning process faulted".to_string()),
        };
-        let temporal_serai = serai.as_of(block_hash);
-        let temporal_serai_validators = temporal_serai.validator_sets();
-        let temporal_serai_instructions = temporal_serai.in_instructions();
-        let temporal_serai_coins = temporal_serai.coins();
-        let (block, key_gen_events, set_retired_events, batch_events, burn_events) =
-          tokio::try_join!(
-            serai.block(block_hash),
-            temporal_serai_validators.key_gen_events(),
-            temporal_serai_validators.set_retired_events(),
-            temporal_serai_instructions.batch_events(),
-            temporal_serai_coins.burn_with_instruction_events(),
-          )
-          .map_err(|e| format!("{e:?}"))?;
-        let Some(block) = block else {
+        let events = serai.events(block_hash).await.map_err(|e| format!("{e}"))?;
+        let set_keys_events = events.validator_sets().set_keys_events().cloned().collect();
+        let slash_report_events =
+          events.validator_sets().slash_report_events().cloned().collect();
+        let batch_events = events.in_instructions().batch_events().cloned().collect();
+        let burn_events = events.coins().burn_with_instruction_events().cloned().collect();
+        let Some(block) = serai.block(block_hash).await.map_err(|e| format!("{e:?}"))? else {
          Err(format!("Serai node didn't have cosigned block #{block_number}"))?
        };
-        let time = if block_number == 0 {
-          block.time().unwrap_or(0)
-        } else {
-          // Serai's block time is in milliseconds
-          block
-            .time()
-            .ok_or_else(|| "non-genesis Serai block didn't have a time".to_string())? /
-            1000
-        };
+        // We use time in seconds, not milliseconds, here
+        let time = block.header.unix_time_in_millis() / 1000;
        Ok((
          block_number,
          CanonicalEvents {
            time,
-            key_gen_events,
-            set_retired_events,
+            set_keys_events,
+            slash_report_events,
            batch_events,
            burn_events,
          },
@@ -131,10 +122,9 @@ impl<D: Db> ContinuallyRan for CanonicalEventStream<D> {
        let mut txn = self.db.txn();
-        for key_gen in block.key_gen_events {
-          let serai_client::validator_sets::ValidatorSetsEvent::KeyGen { set, key_pair } = &key_gen
-          else {
-            panic!("KeyGen event wasn't a KeyGen event: {key_gen:?}");
+        for set_keys in block.set_keys_events {
+          let abi::validator_sets::Event::SetKeys { set, key_pair } = &set_keys else {
+            panic!("`SetKeys` event wasn't a `SetKeys` event: {set_keys:?}");
          };
          crate::Canonical::send(
            &mut txn,
@@ -147,12 +137,10 @@ impl<D: Db> ContinuallyRan for CanonicalEventStream<D> {
          );
        }
-        for set_retired in block.set_retired_events {
-          let serai_client::validator_sets::ValidatorSetsEvent::SetRetired { set } = &set_retired
-          else {
-            panic!("SetRetired event wasn't a SetRetired event: {set_retired:?}");
+        for slash_report in block.slash_report_events {
+          let abi::validator_sets::Event::SlashReport { set } = &slash_report else {
+            panic!("`SlashReport` event wasn't a `SlashReport` event: {slash_report:?}");
          };
+          let Ok(set) = ExternalValidatorSet::try_from(*set) else { continue };
          crate::Canonical::send(
            &mut txn,
            set.network,
@@ -160,10 +148,12 @@ impl<D: Db> ContinuallyRan for CanonicalEventStream<D> {
          );
        }
-        for network in serai_client::primitives::EXTERNAL_NETWORKS {
+        for network in ExternalNetworkId::all() {
          let mut batch = None;
          for this_batch in &block.batch_events {
-            let serai_client::in_instructions::InInstructionsEvent::Batch {
+            // Only irrefutable as this is the only member of the enum at this time
+            #[expect(irrefutable_let_patterns)]
+            let abi::in_instructions::Event::Batch {
              network: batch_network,
              publishing_session,
              id,
@@ -194,14 +184,19 @@ impl<D: Db> ContinuallyRan for CanonicalEventStream<D> {
              })
              .collect(),
            });
+            if LastIndexedBatchId::get(&txn, network) != id.checked_sub(1) {
+              panic!(
+                "next batch from Serai's ID was not an increment of the last indexed batch's ID"
+              );
+            }
+            LastIndexedBatchId::set(&mut txn, network, id);
          }
        }
        let mut burns = vec![];
        for burn in &block.burn_events {
-          let serai_client::coins::CoinsEvent::BurnWithInstruction { from: _, instruction } =
-            &burn
-          else {
+          let abi::coins::Event::BurnWithInstruction { from: _, instruction } = &burn else {
            panic!("BurnWithInstruction event wasn't a BurnWithInstruction event: {burn:?}");
          };
          if instruction.balance.coin.network() == network {
@@ -223,3 +218,7 @@ impl<D: Db> ContinuallyRan for CanonicalEventStream<D> {
    }
  }
 }
+pub(crate) fn last_indexed_batch_id(txn: &impl DbTxn, network: ExternalNetworkId) -> Option<u32> {
+  LastIndexedBatchId::get(txn, network)
+}
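The batch-continuity check added in this hunk enforces the following, per network (a sketch in comment form, using the names from the diff):

// With `last = LastIndexedBatchId::get(&txn, network)`:
//   - the first observed batch must have `id == 0`
//     (`last == None` and `id.checked_sub(1) == None`)
//   - every later batch must have `id == last + 1`
// Any gap or reordering panics rather than silently skipping a batch.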


@@ -3,9 +3,14 @@ use std::sync::Arc;
 use futures::stream::{StreamExt, FuturesOrdered};
-use serai_client::{
-  primitives::{SeraiAddress, EmbeddedEllipticCurve},
-  validator_sets::primitives::{MAX_KEY_SHARES_PER_SET, ExternalValidatorSet},
+use serai_client_serai::{
+  abi::primitives::{
+    BlockHash,
+    crypto::EmbeddedEllipticCurveKeys as EmbeddedEllipticCurveKeysStruct,
+    network_id::ExternalNetworkId,
+    validator_sets::{KeyShares, ExternalValidatorSet},
+    address::SeraiAddress,
+  },
   Serai,
 };
@@ -19,6 +24,10 @@ use crate::NewSetInformation;
 create_db!(
   CoordinatorSubstrateEphemeral {
     NextBlock: () -> u64,
+    EmbeddedEllipticCurveKeys: (
+      network: ExternalNetworkId,
+      validator: SeraiAddress
+    ) -> EmbeddedEllipticCurveKeysStruct,
   }
 );
@@ -49,10 +58,11 @@ impl<D: Db> ContinuallyRan for EphemeralEventStream<D> {
 // These are all the events which generate canonical messages
 struct EphemeralEvents {
-  block_hash: [u8; 32],
+  block_hash: BlockHash,
   time: u64,
-  new_set_events: Vec<serai_client::validator_sets::ValidatorSetsEvent>,
-  accepted_handover_events: Vec<serai_client::validator_sets::ValidatorSetsEvent>,
+  embedded_elliptic_curve_keys_events: Vec<serai_client_serai::abi::validator_sets::Event>,
+  set_decided_events: Vec<serai_client_serai::abi::validator_sets::Event>,
+  accepted_handover_events: Vec<serai_client_serai::abi::validator_sets::Event>,
 }
 // For a cosigned block, fetch all relevant events
@@ -71,31 +81,31 @@ impl<D: Db> ContinuallyRan for EphemeralEventStream<D> {
       Err(serai_cosign::Faulted) => return Err("cosigning process faulted".to_string()),
     };
-    let temporal_serai = serai.as_of(block_hash);
-    let temporal_serai_validators = temporal_serai.validator_sets();
-    let (block, new_set_events, accepted_handover_events) = tokio::try_join!(
-      serai.block(block_hash),
-      temporal_serai_validators.new_set_events(),
-      temporal_serai_validators.accepted_handover_events(),
-    )
-    .map_err(|e| format!("{e:?}"))?;
-    let Some(block) = block else {
+    let events = serai.events(block_hash).await.map_err(|e| format!("{e}"))?;
+    let embedded_elliptic_curve_keys_events = events
+      .validator_sets()
+      .set_embedded_elliptic_curve_keys_events()
+      .cloned()
+      .collect::<Vec<_>>();
+    let set_decided_events =
+      events.validator_sets().set_decided_events().cloned().collect::<Vec<_>>();
+    let accepted_handover_events =
+      events.validator_sets().accepted_handover_events().cloned().collect::<Vec<_>>();
+    let Some(block) = serai.block(block_hash).await.map_err(|e| format!("{e:?}"))? else {
       Err(format!("Serai node didn't have cosigned block #{block_number}"))?
     };
-    let time = if block_number == 0 {
-      block.time().unwrap_or(0)
-    } else {
-      // Serai's block time is in milliseconds
-      block
-        .time()
-        .ok_or_else(|| "non-genesis Serai block didn't have a time".to_string())? /
-        1000
-    };
+    // We use time in seconds, not milliseconds, here
+    let time = block.header.unix_time_in_millis() / 1000;
     Ok((
       block_number,
-      EphemeralEvents { block_hash, time, new_set_events, accepted_handover_events },
+      EphemeralEvents {
+        block_hash,
+        time,
+        embedded_elliptic_curve_keys_events,
+        set_decided_events,
+        accepted_handover_events,
+      },
     ))
   }
 }
@@ -126,105 +136,82 @@ impl<D: Db> ContinuallyRan for EphemeralEventStream<D> {
       let mut txn = self.db.txn();
-      for new_set in block.new_set_events {
-        let serai_client::validator_sets::ValidatorSetsEvent::NewSet { set } = &new_set else {
-          panic!("NewSet event wasn't a NewSet event: {new_set:?}");
+      for event in block.embedded_elliptic_curve_keys_events {
+        let serai_client_serai::abi::validator_sets::Event::SetEmbeddedEllipticCurveKeys {
+          validator,
+          keys,
+        } = &event
+        else {
+          panic!(
+            "{}: {event:?}",
+            "`SetEmbeddedEllipticCurveKeys` event wasn't a `SetEmbeddedEllipticCurveKeys` event"
+          );
         };
+        EmbeddedEllipticCurveKeys::set(&mut txn, keys.network(), *validator, keys);
+      }
+      for set_decided in block.set_decided_events {
+        let serai_client_serai::abi::validator_sets::Event::SetDecided { set, validators } =
+          &set_decided
+        else {
+          panic!("`SetDecided` event wasn't a `SetDecided` event: {set_decided:?}");
+        };
         // We only coordinate over external networks
         let Ok(set) = ExternalValidatorSet::try_from(*set) else { continue };
+        let validators =
+          validators.iter().map(|(validator, weight)| (*validator, weight.0)).collect::<Vec<_>>();
-        let serai = self.serai.as_of(block.block_hash);
-        let serai = serai.validator_sets();
-        let Some(validators) =
-          serai.participants(set.network.into()).await.map_err(|e| format!("{e:?}"))?
-        else {
-          Err(format!(
-            "block #{block_number} declared a new set but didn't have the participants"
-          ))?
-        };
-        let validators = validators
-          .into_iter()
-          .map(|(validator, weight)| (SeraiAddress::from(validator), weight))
-          .collect::<Vec<_>>();
         let in_set = validators.iter().any(|(validator, _)| *validator == self.validator);
         if in_set {
           if u16::try_from(validators.len()).is_err() {
             Err("more than u16::MAX validators sent")?;
           }
-          let Ok(validators) = validators
-            .into_iter()
-            .map(|(validator, weight)| u16::try_from(weight).map(|weight| (validator, weight)))
-            .collect::<Result<Vec<_>, _>>()
-          else {
-            Err("validator's weight exceeded u16::MAX".to_string())?
-          };
           // Do the summation in u32 so we don't risk a u16 overflow
           let total_weight = validators.iter().map(|(_, weight)| u32::from(*weight)).sum::<u32>();
-          if total_weight > u32::from(MAX_KEY_SHARES_PER_SET) {
+          if total_weight > u32::from(KeyShares::MAX_PER_SET) {
             Err(format!(
-              "{set:?} has {total_weight} key shares when the max is {MAX_KEY_SHARES_PER_SET}"
+              "{set:?} has {total_weight} key shares when the max is {}",
+              KeyShares::MAX_PER_SET
             ))?;
           }
-          let total_weight = u16::try_from(total_weight).unwrap();
+          let total_weight = u16::try_from(total_weight)
+            .expect("value smaller than `u16` constant but doesn't fit in `u16`");
           // Fetch all of the validators' embedded elliptic curve keys
-          let mut embedded_elliptic_curve_keys = FuturesOrdered::new();
-          for (validator, _) in &validators {
-            let validator = *validator;
-            // try_join doesn't return a future so we need to wrap it in this additional async
-            // block
-            embedded_elliptic_curve_keys.push_back(async move {
-              tokio::try_join!(
-                // One future to fetch the substrate embedded key
-                serai.embedded_elliptic_curve_key(
-                  validator.into(),
-                  EmbeddedEllipticCurve::Embedwards25519
-                ),
-                // One future to fetch the external embedded key, if there is a distinct curve
-                async {
-                  // `embedded_elliptic_curves` is documented to have the second entry be the
-                  // network-specific curve (if it exists and is distinct from Embedwards25519)
-                  if let Some(curve) = set.network.embedded_elliptic_curves().get(1) {
-                    serai.embedded_elliptic_curve_key(validator.into(), *curve).await.map(Some)
-                  } else {
-                    Ok(None)
-                  }
-                }
-              )
-              .map(|(substrate_embedded_key, external_embedded_key)| {
-                (validator, substrate_embedded_key, external_embedded_key)
-              })
-            });
-          }
           let mut evrf_public_keys = Vec::with_capacity(usize::from(total_weight));
           for (validator, weight) in &validators {
-            let (future_validator, substrate_embedded_key, external_embedded_key) =
-              embedded_elliptic_curve_keys.next().await.unwrap().map_err(|e| format!("{e:?}"))?;
-            assert_eq!(*validator, future_validator);
-            let external_embedded_key =
-              external_embedded_key.unwrap_or(substrate_embedded_key.clone());
-            match (substrate_embedded_key, external_embedded_key) {
-              (Some(substrate_embedded_key), Some(external_embedded_key)) => {
-                let substrate_embedded_key = <[u8; 32]>::try_from(substrate_embedded_key)
-                  .map_err(|_| "Embedwards25519 key wasn't 32 bytes".to_string())?;
-                for _ in 0 .. *weight {
-                  evrf_public_keys.push((substrate_embedded_key, external_embedded_key.clone()));
-                }
+            let keys = match EmbeddedEllipticCurveKeys::get(&txn, set.network, *validator)
+              .expect("selected validator lacked embedded elliptic curve keys")
+            {
+              EmbeddedEllipticCurveKeysStruct::Bitcoin(substrate, external) => {
+                assert_eq!(set.network, ExternalNetworkId::Bitcoin);
+                (substrate, external.to_vec())
              }
-              _ => Err("NewSet with validator missing an embedded key".to_string())?,
+              EmbeddedEllipticCurveKeysStruct::Ethereum(substrate, external) => {
+                assert_eq!(set.network, ExternalNetworkId::Ethereum);
+                (substrate, external.to_vec())
+              }
+              EmbeddedEllipticCurveKeysStruct::Monero(substrate) => {
+                assert_eq!(set.network, ExternalNetworkId::Monero);
+                (substrate, substrate.to_vec())
+              }
+            };
+            for _ in 0 .. *weight {
+              evrf_public_keys.push(keys.clone());
            }
          }
          let mut new_set = NewSetInformation {
            set,
-            serai_block: block.block_hash,
+            serai_block: block.block_hash.0,
            declaration_time: block.time,
            // TODO: This should be inlined into the Processor's key gen code
            // It's legacy from when we removed participants from the key gen
            threshold: ((total_weight * 2) / 3) + 1,
+            // TODO: Why are `validators` and `evrf_public_keys` two separate fields?
            validators,
            evrf_public_keys,
            participant_indexes: Default::default(),
@@ -238,7 +225,7 @@ impl<D: Db> ContinuallyRan for EphemeralEventStream<D> {
       }
       for accepted_handover in block.accepted_handover_events {
-        let serai_client::validator_sets::ValidatorSetsEvent::AcceptedHandover { set } =
+        let serai_client_serai::abi::validator_sets::Event::AcceptedHandover { set } =
           &accepted_handover
         else {
           panic!("AcceptedHandover event wasn't a AcceptedHandover event: {accepted_handover:?}");

View File

@@ -4,15 +4,18 @@
 use std::collections::HashMap;
-use scale::{Encode, Decode};
 use borsh::{BorshSerialize, BorshDeserialize};
 use dkg::Participant;
-use serai_client::{
-  primitives::{ExternalNetworkId, SeraiAddress, Signature},
-  validator_sets::primitives::{Session, ExternalValidatorSet, KeyPair, SlashReport},
-  in_instructions::primitives::SignedBatch,
+use serai_client_serai::abi::{
+  primitives::{
+    network_id::ExternalNetworkId,
+    validator_sets::{Session, ExternalValidatorSet, SlashReport},
+    crypto::{Signature, KeyPair},
+    address::SeraiAddress,
+    instructions::SignedBatch,
+  },
   Transaction,
 };
@@ -20,6 +23,7 @@ use serai_db::*;
 mod canonical;
 pub use canonical::CanonicalEventStream;
+use canonical::last_indexed_batch_id;
 mod ephemeral;
 pub use ephemeral::EphemeralEventStream;
@@ -38,7 +42,7 @@ pub struct NewSetInformation {
   pub set: ExternalValidatorSet,
   /// The Serai block which declared it.
   pub serai_block: [u8; 32],
-  /// The time of the block which declared it, in seconds.
+  /// The time of the block which declared it, in seconds since the epoch.
   pub declaration_time: u64,
   /// The threshold to use.
   pub threshold: u16,
@@ -97,9 +101,9 @@ mod _public_db {
   create_db!(
     CoordinatorSubstrate {
       // Keys to set on the Serai network
-      Keys: (network: ExternalNetworkId) -> (Session, Vec<u8>),
+      Keys: (network: ExternalNetworkId) -> (Session, Transaction),
       // Slash reports to publish onto the Serai network
-      SlashReports: (network: ExternalNetworkId) -> (Session, Vec<u8>),
+      SlashReports: (network: ExternalNetworkId) -> (Session, Transaction),
     }
   );
 }
@@ -172,20 +176,19 @@ impl Keys {
       }
     }
-    let tx = serai_client::validator_sets::SeraiValidatorSets::set_keys(
+    let tx = serai_client_serai::ValidatorSets::set_keys(
       set.network,
       key_pair,
       signature_participants,
       signature,
     );
-    _public_db::Keys::set(txn, set.network, &(set.session, tx.encode()));
+    _public_db::Keys::set(txn, set.network, &(set.session, tx));
   }
   pub(crate) fn take(
     txn: &mut impl DbTxn,
     network: ExternalNetworkId,
   ) -> Option<(Session, Transaction)> {
-    let (session, tx) = _public_db::Keys::take(txn, network)?;
-    Some((session, <_>::decode(&mut tx.as_slice()).unwrap()))
+    _public_db::Keys::take(txn, network)
   }
 }
@@ -194,7 +197,7 @@ pub struct SignedBatches;
 impl SignedBatches {
   /// Send a `SignedBatch` to publish onto Serai.
   pub fn send(txn: &mut impl DbTxn, batch: &SignedBatch) {
-    _public_db::SignedBatches::send(txn, batch.batch.network, batch);
+    _public_db::SignedBatches::send(txn, batch.batch.network(), batch);
   }
   pub(crate) fn try_recv(txn: &mut impl DbTxn, network: ExternalNetworkId) -> Option<SignedBatch> {
     _public_db::SignedBatches::try_recv(txn, network)
@@ -221,18 +224,14 @@ impl SlashReports {
       }
     }
-    let tx = serai_client::validator_sets::SeraiValidatorSets::report_slashes(
-      set.network,
-      slash_report,
-      signature,
-    );
-    _public_db::SlashReports::set(txn, set.network, &(set.session, tx.encode()));
+    let tx =
+      serai_client_serai::ValidatorSets::report_slashes(set.network, slash_report, signature);
+    _public_db::SlashReports::set(txn, set.network, &(set.session, tx));
   }
   pub(crate) fn take(
     txn: &mut impl DbTxn,
     network: ExternalNetworkId,
   ) -> Option<(Session, Transaction)> {
-    let (session, tx) = _public_db::SlashReports::take(txn, network)?;
-    Some((session, <_>::decode(&mut tx.as_slice()).unwrap()))
+    _public_db::SlashReports::take(txn, network)
  }
 }
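
Storing the `Transaction` itself, instead of its SCALE encoding, moves (de)serialization behind the DB macro and removes the manual `decode` in `take`. A sketch of the difference, with a toy byte-backed store standing in for `serai-db` (the `Store` type is illustrative):

use borsh::{BorshSerialize, BorshDeserialize};

#[derive(Debug, PartialEq, BorshSerialize, BorshDeserialize)]
struct Transaction(Vec<u8>);

// A toy single-slot store; the real DB macro does this per typed key
struct Store(Option<Vec<u8>>);

impl Store {
  // Before: callers stored `tx.encode()` and decoded on `take`.
  // After: the store (de)serializes the typed value at the boundary,
  // so `take` returns `Transaction` directly.
  fn set(&mut self, tx: &Transaction) {
    self.0 = Some(borsh::to_vec(tx).unwrap());
  }
  fn take(&mut self) -> Option<Transaction> {
    let bytes = self.0.take()?;
    Some(Transaction::deserialize_reader(&mut bytes.as_slice()).unwrap())
  }
}

fn main() {
  let mut store = Store(None);
  store.set(&Transaction(vec![1, 2, 3]));
  assert_eq!(store.take(), Some(Transaction(vec![1, 2, 3])));
}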

View File

@@ -1,8 +1,10 @@
 use core::future::Future;
 use std::sync::Arc;
-#[rustfmt::skip]
-use serai_client::{primitives::ExternalNetworkId, in_instructions::primitives::SignedBatch, SeraiError, Serai};
+use serai_client_serai::{
+  abi::primitives::{network_id::ExternalNetworkId, instructions::SignedBatch},
+  RpcError, Serai,
+};
 use serai_db::{Get, DbTxn, Db, create_db};
 use serai_task::ContinuallyRan;
@@ -31,7 +33,7 @@ impl<D: Db> PublishBatchTask<D> {
 }
 impl<D: Db> ContinuallyRan for PublishBatchTask<D> {
-  type Error = SeraiError;
+  type Error = RpcError;
   fn run_iteration(&mut self) -> impl Send + Future<Output = Result<bool, Self::Error>> {
     async move {
@@ -43,8 +45,8 @@ impl<D: Db> ContinuallyRan for PublishBatchTask<D> {
       };
       // If this is a Batch not yet published, save it into our unordered mapping
-      if LastPublishedBatch::get(&txn, self.network) < Some(batch.batch.id) {
-        BatchesToPublish::set(&mut txn, self.network, batch.batch.id, &batch);
+      if LastPublishedBatch::get(&txn, self.network) < Some(batch.batch.id()) {
+        BatchesToPublish::set(&mut txn, self.network, batch.batch.id(), &batch);
       }
       txn.commit();
@@ -52,12 +54,8 @@ impl<D: Db> ContinuallyRan for PublishBatchTask<D> {
       // Synchronize our last published batch with the Serai network's
       let next_to_publish = {
-        // This uses the latest finalized block, not the latest cosigned block, which should be
-        // fine as in the worst case, the only impact is no longer attempting TX publication
-        let serai = self.serai.as_of_latest_finalized_block().await?;
-        let last_batch = serai.in_instructions().last_batch_for_network(self.network).await?;
         let mut txn = self.db.txn();
+        let last_batch = crate::last_indexed_batch_id(&txn, self.network);
         let mut our_last_batch = LastPublishedBatch::get(&txn, self.network);
         while our_last_batch < last_batch {
           let next_batch = our_last_batch.map(|batch| batch + 1).unwrap_or(0);
@@ -68,6 +66,7 @@ impl<D: Db> ContinuallyRan for PublishBatchTask<D> {
         if let Some(last_batch) = our_last_batch {
           LastPublishedBatch::set(&mut txn, self.network, &last_batch);
         }
+        txn.commit();
         last_batch.map(|batch| batch + 1).unwrap_or(0)
       };
@@ -75,7 +74,7 @@ impl<D: Db> ContinuallyRan for PublishBatchTask<D> {
       if let Some(batch) = BatchesToPublish::get(&self.db, self.network, next_to_publish) {
         self
           .serai
-          .publish(&serai_client::in_instructions::SeraiInInstructions::execute_batch(batch))
+          .publish_transaction(&serai_client_serai::InInstructions::execute_batch(batch))
           .await?;
         true
       } else {
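
With the RPC round-trip gone, the next batch to publish is derived purely from locally indexed state: `None` (nothing indexed) maps to batch 0, and `Some(n)` to `n + 1`. A sketch of that mapping in isolation:

fn next_to_publish(last_indexed: Option<u32>) -> u32 {
  // No batch indexed yet -> start at 0; otherwise publish the successor
  last_indexed.map(|batch| batch + 1).unwrap_or(0)
}

fn main() {
  assert_eq!(next_to_publish(None), 0);
  assert_eq!(next_to_publish(Some(4)), 5);
}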

View File

@@ -3,7 +3,10 @@ use std::sync::Arc;
 use serai_db::{DbTxn, Db};
-use serai_client::{primitives::ExternalNetworkId, validator_sets::primitives::Session, Serai};
+use serai_client_serai::{
+  abi::primitives::{network_id::ExternalNetworkId, validator_sets::Session},
+  Serai,
+};
 use serai_task::ContinuallyRan;
@@ -33,10 +36,10 @@ impl<D: Db> PublishSlashReportTask<D> {
     // This uses the latest finalized block, not the latest cosigned block, which should be
     // fine as in the worst case, the only impact is no longer attempting TX publication
-    let serai = self.serai.as_of_latest_finalized_block().await.map_err(|e| format!("{e:?}"))?;
-    let serai = serai.validator_sets();
+    let serai = self.serai.state().await.map_err(|e| format!("{e:?}"))?;
     let session_after_slash_report = Session(session.0 + 1);
-    let current_session = serai.session(network.into()).await.map_err(|e| format!("{e:?}"))?;
+    let current_session =
+      serai.current_session(network.into()).await.map_err(|e| format!("{e:?}"))?;
     let current_session = current_session.map(|session| session.0);
     // Only attempt to publish the slash report for session #n while session #n+1 is still
     // active
@@ -55,14 +58,13 @@ impl<D: Db> PublishSlashReportTask<D> {
     }
     // If this session which should publish a slash report already has, move on
-    let key_pending_slash_report =
-      serai.key_pending_slash_report(network).await.map_err(|e| format!("{e:?}"))?;
-    if key_pending_slash_report.is_none() {
+    if !serai.pending_slash_report(network).await.map_err(|e| format!("{e:?}"))? {
       txn.commit();
       return Ok(false);
     };
-    match self.serai.publish(&slash_report).await {
+    // Since this slash report is still pending, publish it
+    match self.serai.publish_transaction(&slash_report).await {
       Ok(()) => {
         txn.commit();
         Ok(true)
@@ -84,7 +86,7 @@ impl<D: Db> ContinuallyRan for PublishSlashReportTask<D> {
     async move {
       let mut made_progress = false;
       let mut error = None;
-      for network in serai_client::primitives::EXTERNAL_NETWORKS {
+      for network in ExternalNetworkId::all() {
        let network_res = self.publish(network).await;
        // We made progress if any network successfully published their slash report
        made_progress |= network_res == Ok(true);

View File

@@ -3,7 +3,10 @@ use std::sync::Arc;
 use serai_db::{DbTxn, Db};
-use serai_client::{validator_sets::primitives::ExternalValidatorSet, Serai};
+use serai_client_serai::{
+  abi::primitives::{network_id::ExternalNetworkId, validator_sets::ExternalValidatorSet},
+  Serai,
+};
 use serai_task::ContinuallyRan;
@@ -28,7 +31,7 @@ impl<D: Db> ContinuallyRan for SetKeysTask<D> {
   fn run_iteration(&mut self) -> impl Send + Future<Output = Result<bool, Self::Error>> {
     async move {
       let mut made_progress = false;
-      for network in serai_client::primitives::EXTERNAL_NETWORKS {
+      for network in ExternalNetworkId::all() {
         let mut txn = self.db.txn();
         let Some((session, keys)) = Keys::take(&mut txn, network) else {
           // No keys to set
@@ -37,10 +40,9 @@ impl<D: Db> ContinuallyRan for SetKeysTask<D> {
         // This uses the latest finalized block, not the latest cosigned block, which should be
         // fine as in the worst case, the only impact is no longer attempting TX publication
-        let serai =
-          self.serai.as_of_latest_finalized_block().await.map_err(|e| format!("{e:?}"))?;
-        let serai = serai.validator_sets();
-        let current_session = serai.session(network.into()).await.map_err(|e| format!("{e:?}"))?;
+        let serai = self.serai.state().await.map_err(|e| format!("{e:?}"))?;
+        let current_session =
+          serai.current_session(network.into()).await.map_err(|e| format!("{e:?}"))?;
         let current_session = current_session.map(|session| session.0);
         // Only attempt to set these keys if this isn't a retired session
         if Some(session.0) < current_session {
@@ -67,7 +69,7 @@ impl<D: Db> ContinuallyRan for SetKeysTask<D> {
           continue;
         };
-        match self.serai.publish(&keys).await {
+        match self.serai.publish_transaction(&keys).await {
          Ok(()) => {
            txn.commit();
            made_progress = true;

View File

@@ -36,7 +36,7 @@ log = { version = "0.4", default-features = false, features = ["std"] }
 serai-db = { path = "../../common/db", version = "0.1" }
-scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["std", "derive"] }
+borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] }
 futures-util = { version = "0.3", default-features = false, features = ["std", "sink", "channel"] }
 futures-channel = { version = "0.3", default-features = false, features = ["std", "sink"] }
 tendermint = { package = "tendermint-machine", path = "./tendermint", version = "0.2" }

View File

@@ -5,7 +5,7 @@ use ciphersuite::{group::GroupEncoding, *};
 use serai_db::{Get, DbTxn, Db};
-use scale::Decode;
+use borsh::BorshDeserialize;
 use tendermint::ext::{Network, Commit};
@@ -62,7 +62,7 @@ impl<D: Db, T: TransactionTrait> Blockchain<D, T> {
     D::key(
       b"tributary_blockchain",
       b"next_nonce",
-      [genesis.as_ref(), signer.to_bytes().as_ref(), order].concat(),
+      [genesis.as_slice(), signer.to_bytes().as_slice(), order].concat(),
     )
   }
@@ -109,7 +109,7 @@ impl<D: Db, T: TransactionTrait> Blockchain<D, T> {
   pub(crate) fn block_from_db(db: &D, genesis: [u8; 32], block: &[u8; 32]) -> Option<Block<T>> {
     db.get(Self::block_key(&genesis, block))
-      .map(|bytes| Block::<T>::read::<&[u8]>(&mut bytes.as_ref()).unwrap())
+      .map(|bytes| Block::<T>::read::<&[u8]>(&mut bytes.as_slice()).unwrap())
   }
   pub(crate) fn commit_from_db(db: &D, genesis: [u8; 32], block: &[u8; 32]) -> Option<Vec<u8>> {
@@ -169,7 +169,7 @@ impl<D: Db, T: TransactionTrait> Blockchain<D, T> {
       // we must have a commit per valid hash
       let commit = Self::commit_from_db(db, genesis, &hash).unwrap();
       // commit has to be valid if it is coming from our db
-      Some(Commit::<N::SignatureScheme>::decode(&mut commit.as_ref()).unwrap())
+      Some(Commit::<N::SignatureScheme>::deserialize_reader(&mut commit.as_slice()).unwrap())
     };
     let unsigned_in_chain =
       |hash: [u8; 32]| db.get(Self::unsigned_included_key(&self.genesis, &hash)).is_some();
@@ -244,7 +244,7 @@ impl<D: Db, T: TransactionTrait> Blockchain<D, T> {
     let commit = |block: u64| -> Option<Commit<N::SignatureScheme>> {
       let commit = self.commit_by_block_number(block)?;
       // commit has to be valid if it is coming from our db
-      Some(Commit::<N::SignatureScheme>::decode(&mut commit.as_ref()).unwrap())
+      Some(Commit::<N::SignatureScheme>::deserialize_reader(&mut commit.as_slice()).unwrap())
     };
     let mut txn_db = db.clone();
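
These call sites swap SCALE's `Decode::decode(&mut commit.as_ref())` for borsh's `deserialize_reader(&mut commit.as_slice())`; both consume a `&[u8]` cursor that advances as it's read. A minimal round-trip under the new pattern (this `Commit` is a simplified stand-in for tendermint's):

use borsh::{BorshSerialize, BorshDeserialize};

// Simplified stand-in for tendermint's Commit
#[derive(Debug, PartialEq, BorshSerialize, BorshDeserialize)]
struct Commit {
  end_time: u64,
  signature: Vec<u8>,
}

fn main() {
  let commit = Commit { end_time: 1_700_000_000, signature: vec![0xaa; 64] };
  // What the DB stores
  let bytes = borsh::to_vec(&commit).unwrap();
  // What the migrated call sites do: `&mut bytes.as_slice()` acts as an io::Read
  let decoded = Commit::deserialize_reader(&mut bytes.as_slice()).unwrap();
  assert_eq!(commit, decoded);
}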

View File

@@ -3,10 +3,11 @@ use std::{sync::Arc, io};
 use zeroize::Zeroizing;
+use borsh::BorshDeserialize;
 use ciphersuite::*;
 use dalek_ff_group::Ristretto;
-use scale::Decode;
 use futures_channel::mpsc::UnboundedReceiver;
 use futures_util::{StreamExt, SinkExt};
 use ::tendermint::{
@@ -177,7 +178,7 @@ impl<D: Db, T: TransactionTrait, P: P2p> Tributary<D, T, P> {
     let block_number = BlockNumber(blockchain.block_number());
     let start_time = if let Some(commit) = blockchain.commit(&blockchain.tip()) {
-      Commit::<Validators>::decode(&mut commit.as_ref()).unwrap().end_time
+      Commit::<Validators>::deserialize_reader(&mut commit.as_slice()).unwrap().end_time
     } else {
       start_time
     };
@@ -276,8 +277,8 @@ impl<D: Db, T: TransactionTrait, P: P2p> Tributary<D, T, P> {
     }
     let block = TendermintBlock(block.serialize());
-    let mut commit_ref = commit.as_ref();
-    let Ok(commit) = Commit::<Arc<Validators>>::decode(&mut commit_ref) else {
+    let mut commit_ref = commit.as_slice();
+    let Ok(commit) = Commit::<Arc<Validators>>::deserialize_reader(&mut commit_ref) else {
       log::error!("sent an invalidly serialized commit");
       return false;
     };
@@ -327,7 +328,7 @@ impl<D: Db, T: TransactionTrait, P: P2p> Tributary<D, T, P> {
       Some(&TENDERMINT_MESSAGE) => {
         let Ok(msg) =
-          SignedMessageFor::<TendermintNetwork<D, T, P>>::decode::<&[u8]>(&mut &msg[1 ..])
+          SignedMessageFor::<TendermintNetwork<D, T, P>>::deserialize_reader(&mut &msg[1 ..])
         else {
           log::error!("received invalid tendermint message");
           return false;
@@ -367,15 +368,17 @@ impl<D: Db, T: TransactionTrait> TributaryReader<D, T> {
     Blockchain::<D, T>::commit_from_db(&self.0, self.1, hash)
   }
   pub fn parsed_commit(&self, hash: &[u8; 32]) -> Option<Commit<Validators>> {
-    self.commit(hash).map(|commit| Commit::<Validators>::decode(&mut commit.as_ref()).unwrap())
+    self
+      .commit(hash)
+      .map(|commit| Commit::<Validators>::deserialize_reader(&mut commit.as_slice()).unwrap())
   }
   pub fn block_after(&self, hash: &[u8; 32]) -> Option<[u8; 32]> {
     Blockchain::<D, T>::block_after(&self.0, self.1, hash)
   }
   pub fn time_of_block(&self, hash: &[u8; 32]) -> Option<u64> {
-    self
-      .commit(hash)
-      .map(|commit| Commit::<Validators>::decode(&mut commit.as_ref()).unwrap().end_time)
+    self.commit(hash).map(|commit| {
+      Commit::<Validators>::deserialize_reader(&mut commit.as_slice()).unwrap().end_time
+    })
   }
   pub fn locally_provided_txs_in_block(&self, hash: &[u8; 32], order: &str) -> bool {

View File

@@ -21,7 +21,7 @@ use schnorr::{
 use serai_db::Db;
-use scale::{Encode, Decode};
+use borsh::{BorshSerialize, BorshDeserialize};
 use tendermint::{
   SignedMessageFor,
   ext::{
@@ -248,7 +248,7 @@ impl Weights for Validators {
   }
 }
-#[derive(Clone, PartialEq, Eq, Debug, Encode, Decode)]
+#[derive(Clone, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)]
 pub struct TendermintBlock(pub Vec<u8>);
 impl BlockTrait for TendermintBlock {
   type Id = [u8; 32];
@@ -300,7 +300,7 @@ impl<D: Db, T: TransactionTrait, P: P2p> Network for TendermintNetwork<D, T, P>
   fn broadcast(&mut self, msg: SignedMessageFor<Self>) -> impl Send + Future<Output = ()> {
     async move {
       let mut to_broadcast = vec![TENDERMINT_MESSAGE];
-      to_broadcast.extend(msg.encode());
+      msg.serialize(&mut to_broadcast).unwrap();
       self.p2p.broadcast(self.genesis, to_broadcast).await
     }
   }
@@ -390,7 +390,7 @@ impl<D: Db, T: TransactionTrait, P: P2p> Network for TendermintNetwork<D, T, P>
       return invalid_block();
     };
-    let encoded_commit = commit.encode();
+    let encoded_commit = borsh::to_vec(&commit).unwrap();
     loop {
       let block_res = self.blockchain.write().await.add_block::<Self>(
         &block,

View File

@@ -1,6 +1,6 @@
 use std::io;
-use scale::{Encode, Decode, IoReader};
+use borsh::BorshDeserialize;
 use blake2::{Digest, Blake2s256};
@@ -27,14 +27,14 @@ pub enum TendermintTx {
 impl ReadWrite for TendermintTx {
   fn read<R: io::Read>(reader: &mut R) -> io::Result<Self> {
-    Evidence::decode(&mut IoReader(reader))
+    Evidence::deserialize_reader(reader)
       .map(TendermintTx::SlashEvidence)
       .map_err(|_| io::Error::new(io::ErrorKind::InvalidData, "invalid evidence format"))
   }
   fn write<W: io::Write>(&self, writer: &mut W) -> io::Result<()> {
     match self {
-      TendermintTx::SlashEvidence(ev) => writer.write_all(&ev.encode()),
+      TendermintTx::SlashEvidence(ev) => writer.write_all(&borsh::to_vec(&ev).unwrap()),
     }
   }
 }
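
SCALE's `Decode` reads from its own `Input` trait, so `io::Read` sources needed the `IoReader` adapter; borsh's `deserialize_reader` accepts `io::Read` directly, which is why the adapter disappears above. A sketch of the migrated read/write pair with a toy `Evidence` type (illustrative, not the crate's):

use std::io;
use borsh::{BorshSerialize, BorshDeserialize};

#[derive(Debug, BorshSerialize, BorshDeserialize)]
struct Evidence(Vec<u8>);

// Mirrors the migrated ReadWrite impl: read straight from io::Read,
// write out the borsh encoding
fn read_evidence<R: io::Read>(reader: &mut R) -> io::Result<Evidence> {
  Evidence::deserialize_reader(reader)
    .map_err(|_| io::Error::new(io::ErrorKind::InvalidData, "invalid evidence format"))
}

fn write_evidence<W: io::Write>(ev: &Evidence, writer: &mut W) -> io::Result<()> {
  writer.write_all(&borsh::to_vec(ev).unwrap())
}

fn main() -> io::Result<()> {
  let mut buf = vec![];
  write_evidence(&Evidence(vec![1, 2, 3]), &mut buf)?;
  let ev = read_evidence(&mut buf.as_slice())?;
  assert_eq!(ev.0, vec![1, 2, 3]);
  Ok(())
}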

View File

@@ -10,8 +10,6 @@ use dalek_ff_group::Ristretto;
 use ciphersuite::*;
 use schnorr::SchnorrSignature;
-use scale::Encode;
 use ::tendermint::{
   ext::{Network, Signer as SignerTrait, SignatureScheme, BlockNumber, RoundNumber},
   SignedMessageFor, DataFor, Message, SignedMessage, Data, Evidence,
@@ -200,7 +198,7 @@ pub async fn signed_from_data<N: Network>(
     round: RoundNumber(round_number),
     data,
   };
-  let sig = signer.sign(&msg.encode()).await;
+  let sig = signer.sign(&borsh::to_vec(&msg).unwrap()).await;
   SignedMessage { msg, sig }
 }
@@ -213,5 +211,5 @@ pub async fn random_evidence_tx<N: Network>(
   let data = Data::Proposal(Some(RoundNumber(0)), b);
   let signer_id = signer.validator_id().await.unwrap();
   let signed = signed_from_data::<N>(signer, signer_id, 0, 0, data).await;
-  TendermintTx::SlashEvidence(Evidence::InvalidValidRound(signed.encode()))
+  TendermintTx::SlashEvidence(Evidence::InvalidValidRound(borsh::to_vec(&signed).unwrap()))
 }

View File

@@ -6,8 +6,6 @@ use rand::{RngCore, rngs::OsRng};
 use dalek_ff_group::Ristretto;
 use ciphersuite::*;
-use scale::Encode;
 use tendermint::{
   time::CanonicalInstant,
   round::RoundData,
@@ -52,7 +50,10 @@ async fn invalid_valid_round() {
     async move {
       let data = Data::Proposal(valid_round, TendermintBlock(vec![]));
       let signed = signed_from_data::<N>(signer.clone().into(), signer_id, 0, 0, data).await;
-      (signed.clone(), TendermintTx::SlashEvidence(Evidence::InvalidValidRound(signed.encode())))
+      (
+        signed.clone(),
+        TendermintTx::SlashEvidence(Evidence::InvalidValidRound(borsh::to_vec(&signed).unwrap())),
+      )
     }
   };
@@ -70,7 +71,8 @@ async fn invalid_valid_round() {
   let mut random_sig = [0u8; 64];
   OsRng.fill_bytes(&mut random_sig);
   signed.sig = random_sig;
-  let tx = TendermintTx::SlashEvidence(Evidence::InvalidValidRound(signed.encode()));
+  let tx =
+    TendermintTx::SlashEvidence(Evidence::InvalidValidRound(borsh::to_vec(&signed).unwrap()));
   // should fail
   assert!(verify_tendermint_tx::<N>(&tx, &validators, commit).is_err());
@@ -90,7 +92,10 @@ async fn invalid_precommit_signature() {
      let signed =
        signed_from_data::<N>(signer.clone().into(), signer_id, 1, 0, Data::Precommit(precommit))
          .await;
-      (signed.clone(), TendermintTx::SlashEvidence(Evidence::InvalidPrecommit(signed.encode())))
+      (
+        signed.clone(),
+        TendermintTx::SlashEvidence(Evidence::InvalidPrecommit(borsh::to_vec(&signed).unwrap())),
+      )
    }
  };
@@ -120,7 +125,8 @@ async fn invalid_precommit_signature() {
   let mut random_sig = [0u8; 64];
   OsRng.fill_bytes(&mut random_sig);
   signed.sig = random_sig;
-  let tx = TendermintTx::SlashEvidence(Evidence::InvalidPrecommit(signed.encode()));
+  let tx =
+    TendermintTx::SlashEvidence(Evidence::InvalidPrecommit(borsh::to_vec(&signed).unwrap()));
   assert!(verify_tendermint_tx::<N>(&tx, &validators, commit).is_err());
  }
 }
@@ -138,24 +144,32 @@ async fn evidence_with_prevote() {
   // it should fail for all reasons.
   let mut txs = vec![];
   txs.push(TendermintTx::SlashEvidence(Evidence::InvalidPrecommit(
-    signed_from_data::<N>(signer.clone().into(), signer_id, 0, 0, Data::Prevote(block_id))
-      .await
-      .encode(),
+    borsh::to_vec(
+      &&signed_from_data::<N>(signer.clone().into(), signer_id, 0, 0, Data::Prevote(block_id))
+        .await,
+    )
+    .unwrap(),
   )));
   txs.push(TendermintTx::SlashEvidence(Evidence::InvalidValidRound(
-    signed_from_data::<N>(signer.clone().into(), signer_id, 0, 0, Data::Prevote(block_id))
-      .await
-      .encode(),
+    borsh::to_vec(
+      &signed_from_data::<N>(signer.clone().into(), signer_id, 0, 0, Data::Prevote(block_id))
+        .await,
+    )
+    .unwrap(),
   )));
   // Since these require a second message, provide this one again
   // ConflictingMessages can be fired for actually conflicting Prevotes however
   txs.push(TendermintTx::SlashEvidence(Evidence::ConflictingMessages(
-    signed_from_data::<N>(signer.clone().into(), signer_id, 0, 0, Data::Prevote(block_id))
-      .await
-      .encode(),
-    signed_from_data::<N>(signer.clone().into(), signer_id, 0, 0, Data::Prevote(block_id))
-      .await
-      .encode(),
+    borsh::to_vec(
+      &signed_from_data::<N>(signer.clone().into(), signer_id, 0, 0, Data::Prevote(block_id))
+        .await,
+    )
+    .unwrap(),
+    borsh::to_vec(
+      &signed_from_data::<N>(signer.clone().into(), signer_id, 0, 0, Data::Prevote(block_id))
+        .await,
+    )
+    .unwrap(),
   )));
   txs
 }
@@ -189,16 +203,16 @@ async fn conflicting_msgs_evidence_tx() {
    // non-conflicting data should fail
    let signed_1 = signed_for_b_r(0, 0, Data::Proposal(None, TendermintBlock(vec![0x11]))).await;
    let tx = TendermintTx::SlashEvidence(Evidence::ConflictingMessages(
-      signed_1.encode(),
-      signed_1.encode(),
+      borsh::to_vec(&signed_1).unwrap(),
+      borsh::to_vec(&signed_1).unwrap(),
    ));
    assert!(verify_tendermint_tx::<N>(&tx, &validators, commit).is_err());
    // conflicting data should pass
    let signed_2 = signed_for_b_r(0, 0, Data::Proposal(None, TendermintBlock(vec![0x22]))).await;
    let tx = TendermintTx::SlashEvidence(Evidence::ConflictingMessages(
-      signed_1.encode(),
-      signed_2.encode(),
+      borsh::to_vec(&signed_1).unwrap(),
+      borsh::to_vec(&signed_2).unwrap(),
    ));
    verify_tendermint_tx::<N>(&tx, &validators, commit).unwrap();
@@ -206,16 +220,16 @@ async fn conflicting_msgs_evidence_tx() {
    // (except for Precommit)
    let signed_2 = signed_for_b_r(0, 1, Data::Proposal(None, TendermintBlock(vec![0x22]))).await;
    let tx = TendermintTx::SlashEvidence(Evidence::ConflictingMessages(
-      signed_1.encode(),
-      signed_2.encode(),
+      borsh::to_vec(&signed_1).unwrap(),
+      borsh::to_vec(&signed_2).unwrap(),
    ));
    verify_tendermint_tx::<N>(&tx, &validators, commit).unwrap_err();
    // Proposals for different block numbers should also fail as evidence
    let signed_2 = signed_for_b_r(1, 0, Data::Proposal(None, TendermintBlock(vec![0x22]))).await;
    let tx = TendermintTx::SlashEvidence(Evidence::ConflictingMessages(
-      signed_1.encode(),
-      signed_2.encode(),
+      borsh::to_vec(&signed_1).unwrap(),
+      borsh::to_vec(&signed_2).unwrap(),
    ));
    verify_tendermint_tx::<N>(&tx, &validators, commit).unwrap_err();
  }
@@ -225,16 +239,16 @@ async fn conflicting_msgs_evidence_tx() {
    // non-conflicting data should fail
    let signed_1 = signed_for_b_r(0, 0, Data::Prevote(Some([0x11; 32]))).await;
    let tx = TendermintTx::SlashEvidence(Evidence::ConflictingMessages(
-      signed_1.encode(),
-      signed_1.encode(),
+      borsh::to_vec(&signed_1).unwrap(),
+      borsh::to_vec(&signed_1).unwrap(),
    ));
    assert!(verify_tendermint_tx::<N>(&tx, &validators, commit).is_err());
    // conflicting data should pass
    let signed_2 = signed_for_b_r(0, 0, Data::Prevote(Some([0x22; 32]))).await;
    let tx = TendermintTx::SlashEvidence(Evidence::ConflictingMessages(
-      signed_1.encode(),
-      signed_2.encode(),
+      borsh::to_vec(&signed_1).unwrap(),
+      borsh::to_vec(&signed_2).unwrap(),
    ));
    verify_tendermint_tx::<N>(&tx, &validators, commit).unwrap();
@@ -242,16 +256,16 @@ async fn conflicting_msgs_evidence_tx() {
    // (except for Precommit)
    let signed_2 = signed_for_b_r(0, 1, Data::Prevote(Some([0x22; 32]))).await;
    let tx = TendermintTx::SlashEvidence(Evidence::ConflictingMessages(
-      signed_1.encode(),
-      signed_2.encode(),
+      borsh::to_vec(&signed_1).unwrap(),
+      borsh::to_vec(&signed_2).unwrap(),
    ));
    verify_tendermint_tx::<N>(&tx, &validators, commit).unwrap_err();
    // Proposals for different block numbers should also fail as evidence
    let signed_2 = signed_for_b_r(1, 0, Data::Prevote(Some([0x22; 32]))).await;
    let tx = TendermintTx::SlashEvidence(Evidence::ConflictingMessages(
-      signed_1.encode(),
-      signed_2.encode(),
+      borsh::to_vec(&signed_1).unwrap(),
+      borsh::to_vec(&signed_2).unwrap(),
    ));
    verify_tendermint_tx::<N>(&tx, &validators, commit).unwrap_err();
  }
@@ -273,8 +287,8 @@ async fn conflicting_msgs_evidence_tx() {
      .await;
    let tx = TendermintTx::SlashEvidence(Evidence::ConflictingMessages(
-      signed_1.encode(),
-      signed_2.encode(),
+      borsh::to_vec(&signed_1).unwrap(),
+      borsh::to_vec(&signed_2).unwrap(),
    ));
    // update schema so that we don't fail due to invalid signature
@@ -292,8 +306,8 @@ async fn conflicting_msgs_evidence_tx() {
    let signed_1 = signed_for_b_r(0, 0, Data::Proposal(None, TendermintBlock(vec![]))).await;
    let signed_2 = signed_for_b_r(0, 0, Data::Prevote(None)).await;
    let tx = TendermintTx::SlashEvidence(Evidence::ConflictingMessages(
-      signed_1.encode(),
-      signed_2.encode(),
+      borsh::to_vec(&signed_1).unwrap(),
+      borsh::to_vec(&signed_2).unwrap(),
    ));
    assert!(verify_tendermint_tx::<N>(&tx, &validators, commit).is_err());
  }

View File

@@ -6,7 +6,7 @@ license = "MIT"
 repository = "https://github.com/serai-dex/serai/tree/develop/coordinator/tendermint"
 authors = ["Luke Parker <lukeparker5132@gmail.com>"]
 edition = "2021"
-rust-version = "1.75"
+rust-version = "1.77"
 [package.metadata.docs.rs]
 all-features = true
@@ -21,7 +21,7 @@ thiserror = { version = "2", default-features = false, features = ["std"] }
 hex = { version = "0.4", default-features = false, features = ["std"] }
 log = { version = "0.4", default-features = false, features = ["std"] }
-parity-scale-codec = { version = "3", default-features = false, features = ["std", "derive"] }
+borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] }
 futures-util = { version = "0.3", default-features = false, features = ["std", "async-await-macro", "sink", "channel"] }
 futures-channel = { version = "0.3", default-features = false, features = ["std", "sink"] }

View File

@@ -3,33 +3,41 @@ use std::{sync::Arc, collections::HashSet};
 use thiserror::Error;
-use parity_scale_codec::{Encode, Decode};
+use borsh::{BorshSerialize, BorshDeserialize};
 use crate::{SignedMessageFor, SlashEvent, commit_msg};
 /// An alias for a series of traits required for a type to be usable as a validator ID,
 /// automatically implemented for all types satisfying those traits.
 pub trait ValidatorId:
-  Send + Sync + Clone + Copy + PartialEq + Eq + Hash + Debug + Encode + Decode
+  Send + Sync + Clone + Copy + PartialEq + Eq + Hash + Debug + BorshSerialize + BorshDeserialize
 {
 }
-impl<V: Send + Sync + Clone + Copy + PartialEq + Eq + Hash + Debug + Encode + Decode> ValidatorId
-  for V
+#[rustfmt::skip]
+impl<
+  V: Send + Sync + Clone + Copy + PartialEq + Eq + Hash + Debug + BorshSerialize + BorshDeserialize,
+> ValidatorId for V
 {
 }
 /// An alias for a series of traits required for a type to be usable as a signature,
 /// automatically implemented for all types satisfying those traits.
-pub trait Signature: Send + Sync + Clone + PartialEq + Eq + Debug + Encode + Decode {}
-impl<S: Send + Sync + Clone + PartialEq + Eq + Debug + Encode + Decode> Signature for S {}
+pub trait Signature:
+  Send + Sync + Clone + PartialEq + Eq + Debug + BorshSerialize + BorshDeserialize
+{
+}
+impl<S: Send + Sync + Clone + PartialEq + Eq + Debug + BorshSerialize + BorshDeserialize> Signature
+  for S
+{
+}
 // Type aliases which are distinct according to the type system
 /// A struct containing a Block Number, wrapped to have a distinct type.
-#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug, Encode, Decode)]
+#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug, BorshSerialize, BorshDeserialize)]
 pub struct BlockNumber(pub u64);
 /// A struct containing a round number, wrapped to have a distinct type.
-#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug, Encode, Decode)]
+#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug, BorshSerialize, BorshDeserialize)]
 pub struct RoundNumber(pub u32);
 /// A signer for a validator.
@@ -127,7 +135,7 @@ impl<S: SignatureScheme> SignatureScheme for Arc<S> {
 /// A commit for a specific block.
 ///
 /// The list of validators have weight exceeding the threshold for a valid commit.
-#[derive(PartialEq, Debug, Encode, Decode)]
+#[derive(PartialEq, Debug, BorshSerialize, BorshDeserialize)]
 pub struct Commit<S: SignatureScheme> {
   /// End time of the round which created this commit, used as the start time of the next block.
   pub end_time: u64,
@@ -185,7 +193,7 @@ impl<W: Weights> Weights for Arc<W> {
 }
 /// Simplified error enum representing a block's validity.
-#[derive(Clone, Copy, PartialEq, Eq, Debug, Error, Encode, Decode)]
+#[derive(Clone, Copy, PartialEq, Eq, Debug, Error, BorshSerialize, BorshDeserialize)]
 pub enum BlockError {
   /// Malformed block which is wholly invalid.
   #[error("invalid block")]
@@ -197,9 +205,20 @@ pub enum BlockError {
 }
 /// Trait representing a Block.
-pub trait Block: Send + Sync + Clone + PartialEq + Eq + Debug + Encode + Decode {
+pub trait Block:
+  Send + Sync + Clone + PartialEq + Eq + Debug + BorshSerialize + BorshDeserialize
+{
   // Type used to identify blocks. Presumably a cryptographic hash of the block.
-  type Id: Send + Sync + Copy + Clone + PartialEq + Eq + AsRef<[u8]> + Debug + Encode + Decode;
+  type Id: Send
+    + Sync
+    + Copy
+    + Clone
+    + PartialEq
+    + Eq
+    + AsRef<[u8]>
+    + Debug
+    + BorshSerialize
+    + BorshDeserialize;
   /// Return the deterministic, unique ID for this block.
   fn id(&self) -> Self::Id;
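
Any type deriving `BorshSerialize` and `BorshDeserialize` (plus the usual marker traits) satisfies the rewritten bounds. A minimal sketch of a block-like type meeting bounds of this shape (simplified; not the crate's `TendermintBlock`):

use borsh::{BorshSerialize, BorshDeserialize};

// Satisfies bounds of the shape the trait now requires:
// Send + Sync + Clone + PartialEq + Eq + Debug + BorshSerialize + BorshDeserialize
#[derive(Clone, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)]
struct ToyBlock {
  id: [u8; 32],
  payload: Vec<u8>,
}

fn main() {
  let block = ToyBlock { id: [0; 32], payload: vec![1, 2, 3] };
  let bytes = borsh::to_vec(&block).unwrap();
  assert_eq!(ToyBlock::deserialize_reader(&mut bytes.as_slice()).unwrap(), block);
}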

View File

@@ -1,5 +1,3 @@
#![expect(clippy::cast_possible_truncation)]
use core::fmt::Debug; use core::fmt::Debug;
use std::{ use std::{
@@ -8,7 +6,7 @@ use std::{
collections::{VecDeque, HashMap}, collections::{VecDeque, HashMap},
}; };
use parity_scale_codec::{Encode, Decode, IoReader}; use borsh::{BorshSerialize, BorshDeserialize};
use futures_channel::mpsc; use futures_channel::mpsc;
use futures_util::{ use futures_util::{
@@ -43,14 +41,14 @@ pub fn commit_msg(end_time: u64, id: &[u8]) -> Vec<u8> {
[&end_time.to_le_bytes(), id].concat() [&end_time.to_le_bytes(), id].concat()
} }
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug, Encode, Decode)] #[derive(Clone, Copy, PartialEq, Eq, Hash, Debug, BorshSerialize, BorshDeserialize)]
pub enum Step { pub enum Step {
Propose, Propose,
Prevote, Prevote,
Precommit, Precommit,
} }
#[derive(Clone, Eq, Debug, Encode, Decode)] #[derive(Clone, Eq, Debug, BorshSerialize, BorshDeserialize)]
pub enum Data<B: Block, S: Signature> { pub enum Data<B: Block, S: Signature> {
Proposal(Option<RoundNumber>, B), Proposal(Option<RoundNumber>, B),
Prevote(Option<B::Id>), Prevote(Option<B::Id>),
@@ -92,7 +90,7 @@ impl<B: Block, S: Signature> Data<B, S> {
} }
} }
#[derive(Clone, PartialEq, Eq, Debug, Encode, Decode)] #[derive(Clone, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)]
pub struct Message<V: ValidatorId, B: Block, S: Signature> { pub struct Message<V: ValidatorId, B: Block, S: Signature> {
pub sender: V, pub sender: V,
pub block: BlockNumber, pub block: BlockNumber,
@@ -102,7 +100,7 @@ pub struct Message<V: ValidatorId, B: Block, S: Signature> {
} }
/// A signed Tendermint consensus message to be broadcast to the other validators.
- #[derive(Clone, PartialEq, Eq, Debug, Encode, Decode)]
+ #[derive(Clone, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)]
pub struct SignedMessage<V: ValidatorId, B: Block, S: Signature> {
  pub msg: Message<V, B, S>,
  pub sig: S,
@@ -119,18 +117,18 @@ impl<V: ValidatorId, B: Block, S: Signature> SignedMessage<V, B, S> {
    &self,
    signer: &Scheme,
  ) -> bool {
-   signer.verify(self.msg.sender, &self.msg.encode(), &self.sig)
+   signer.verify(self.msg.sender, &borsh::to_vec(&self.msg).unwrap(), &self.sig)
  }
}

- #[derive(Clone, Copy, PartialEq, Eq, Debug, Encode, Decode)]
+ #[derive(Clone, Copy, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)]
pub enum SlashReason {
  FailToPropose,
  InvalidBlock,
  InvalidProposer,
}

- #[derive(Clone, PartialEq, Eq, Debug, Encode, Decode)]
+ #[derive(Clone, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)]
pub enum Evidence {
  ConflictingMessages(Vec<u8>, Vec<u8>),
  InvalidPrecommit(Vec<u8>),
@@ -161,7 +159,7 @@ pub type SignedMessageFor<N> = SignedMessage<
>;

pub fn decode_signed_message<N: Network>(mut data: &[u8]) -> Option<SignedMessageFor<N>> {
- SignedMessageFor::<N>::decode(&mut data).ok()
+ SignedMessageFor::<N>::deserialize_reader(&mut data).ok()
}

fn decode_and_verify_signed_message<N: Network>(
@@ -341,7 +339,7 @@ impl<N: Network + 'static> TendermintMachine<N> {
      target: "tendermint",
      "proposer for block {}, round {round:?} was {} (me: {res})",
      self.block.number.0,
-     hex::encode(proposer.encode()),
+     hex::encode(borsh::to_vec(&proposer).unwrap()),
    );
    res
  }
@@ -422,7 +420,11 @@ impl<N: Network + 'static> TendermintMachine<N> {
    // TODO: If the new slash event has evidence, emit to prevent a low-importance slash from
    // cancelling emission of high-importance slashes
    if !self.block.slashes.contains(&validator) {
-     log::info!(target: "tendermint", "Slashing validator {}", hex::encode(validator.encode()));
+     log::info!(
+       target: "tendermint",
+       "Slashing validator {}",
+       hex::encode(borsh::to_vec(&validator).unwrap()),
+     );
      self.block.slashes.insert(validator);
      self.network.slash(validator, slash_event).await;
    }
@@ -672,7 +674,7 @@ impl<N: Network + 'static> TendermintMachine<N> {
      self
        .slash(
          msg.sender,
-         SlashEvent::WithEvidence(Evidence::InvalidPrecommit(signed.encode())),
+         SlashEvent::WithEvidence(Evidence::InvalidPrecommit(borsh::to_vec(&signed).unwrap())),
        )
        .await;
      Err(TendermintError::Malicious)?;
@@ -743,7 +745,10 @@ impl<N: Network + 'static> TendermintMachine<N> {
        self.broadcast(Data::Prevote(None));
      }
      self
-       .slash(msg.sender, SlashEvent::WithEvidence(Evidence::InvalidValidRound(msg.encode())))
+       .slash(
+         msg.sender,
+         SlashEvent::WithEvidence(Evidence::InvalidValidRound(borsh::to_vec(&msg).unwrap())),
+       )
        .await;
      Err(TendermintError::Malicious)?;
    }
@@ -1034,7 +1039,7 @@ impl<N: Network + 'static> TendermintMachine<N> {
      while !messages.is_empty() {
        self.network.broadcast(
-         SignedMessageFor::<N>::decode(&mut IoReader(&mut messages))
+         SignedMessageFor::<N>::deserialize_reader(&mut messages)
            .expect("saved invalid message to DB")
        ).await;
      }
@@ -1059,7 +1064,7 @@ impl<N: Network + 'static> TendermintMachine<N> {
    } {
      if our_message {
        assert!(sig.is_none());
-       sig = Some(self.signer.sign(&msg.encode()).await);
+       sig = Some(self.signer.sign(&borsh::to_vec(&msg).unwrap()).await);
      }
      let sig = sig.unwrap();
@@ -1079,7 +1084,7 @@ impl<N: Network + 'static> TendermintMachine<N> {
    let message_tape_key = message_tape_key(self.genesis);
    let mut txn = self.db.txn();
    let mut message_tape = txn.get(&message_tape_key).unwrap_or(vec![]);
-   message_tape.extend(signed_msg.encode());
+   signed_msg.serialize(&mut message_tape).unwrap();
    txn.put(&message_tape_key, message_tape);
    txn.commit();
  }
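The pattern in this migration is uniform: SCALE's `.encode()` becomes `borsh::to_vec(&value).unwrap()`, and SCALE's `decode` becomes borsh's `deserialize_reader`. A minimal sketch of that round-trip, using an illustrative type rather than the crate's actual `Message`:

use borsh::{BorshSerialize, BorshDeserialize};

#[derive(Debug, PartialEq, BorshSerialize, BorshDeserialize)]
struct ExampleMessage {
  sender: [u8; 32],
  round: u32,
}

fn main() {
  let msg = ExampleMessage { sender: [0; 32], round: 5 };
  // `borsh::to_vec` replaces SCALE's `.encode()`
  let bytes = borsh::to_vec(&msg).unwrap();
  // `deserialize_reader` replaces SCALE's `decode`; reading from `&mut &[u8]`
  // advances the slice past the consumed bytes, which is what lets the message
  // tape above be replayed message-by-message in a loop
  let decoded = ExampleMessage::deserialize_reader(&mut bytes.as_slice()).unwrap();
  assert_eq!(msg, decoded);
}

Note how `signed_msg.serialize(&mut message_tape)` appends to the existing buffer, preserving the tape's concatenated-messages format without an intermediate allocation.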

View File

@@ -1,7 +1,5 @@
use std::{sync::Arc, collections::HashMap};

- use parity_scale_codec::Encode;

use crate::{ext::*, RoundNumber, Step, DataFor, SignedMessageFor, Evidence};

type RoundLog<N> = HashMap<<N as Network>::ValidatorId, HashMap<Step, SignedMessageFor<N>>>;
@@ -39,7 +37,10 @@ impl<N: Network> MessageLog<N> {
        target: "tendermint",
        "Validator sent multiple messages for the same block + round + step"
      );
-     Err(Evidence::ConflictingMessages(existing.encode(), signed.encode()))?;
+     Err(Evidence::ConflictingMessages(
+       borsh::to_vec(&existing).unwrap(),
+       borsh::to_vec(&signed).unwrap(),
+     ))?;
    }
    return Ok(false);
  }
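For context, the surrounding `MessageLog` logic (only partially visible in this hunk) keeps one message per validator per step and emits `ConflictingMessages` evidence on a mismatch. A simplified sketch of that check, with stand-in types in place of the crate's generics:

use std::collections::HashMap;

// Stand-ins for the crate's generic validator / step types
type ValidatorId = u8;
type Step = u8;

// Returns Ok(true) if newly logged, Ok(false) on an identical re-broadcast, and
// Err carrying both encodings upon equivocation
fn log_message(
  log: &mut HashMap<ValidatorId, HashMap<Step, Vec<u8>>>,
  validator: ValidatorId,
  step: Step,
  msg: Vec<u8>,
) -> Result<bool, (Vec<u8>, Vec<u8>)> {
  let validator_log = log.entry(validator).or_default();
  if let Some(existing) = validator_log.get(&step) {
    if *existing != msg {
      // Two distinct messages for the same block + round + step: equivocation
      return Err((existing.clone(), msg));
    }
    // An exact re-broadcast carries no new information
    return Ok(false);
  }
  validator_log.insert(step, msg);
  Ok(true)
}

fn main() {
  let mut log = HashMap::new();
  assert_eq!(log_message(&mut log, 0, 0, vec![1]), Ok(true));
  assert_eq!(log_message(&mut log, 0, 0, vec![1]), Ok(false));
  assert!(log_message(&mut log, 0, 0, vec![2]).is_err());
}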

View File

@@ -4,7 +4,7 @@ use std::{
  time::{UNIX_EPOCH, SystemTime, Duration},
};

- use parity_scale_codec::{Encode, Decode};
+ use borsh::{BorshSerialize, BorshDeserialize};

use futures_util::sink::SinkExt;
use tokio::{sync::RwLock, time::sleep};
@@ -89,7 +89,7 @@ impl Weights for TestWeights {
  }
}

- #[derive(Clone, PartialEq, Eq, Debug, Encode, Decode)]
+ #[derive(Clone, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)]
struct TestBlock {
  id: TestBlockId,
  valid: Result<(), BlockError>,

View File

@@ -21,7 +21,6 @@ workspace = true
zeroize = { version = "^1.5", default-features = false, features = ["std"] }
rand_core = { version = "0.6", default-features = false, features = ["std"] }
- scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["std", "derive"] }
borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] }
blake2 = { version = "0.11.0-rc.0", default-features = false, features = ["alloc"] }
@@ -30,14 +29,14 @@ dalek-ff-group = { path = "../../crypto/dalek-ff-group", default-features = fals
dkg = { path = "../../crypto/dkg", default-features = false, features = ["std"] }
schnorr = { package = "schnorr-signatures", path = "../../crypto/schnorr", default-features = false, features = ["std"] }
- serai-client = { path = "../../substrate/client", default-features = false, features = ["serai", "borsh"] }
+ serai-primitives = { path = "../../substrate/primitives", default-features = false, features = ["std"] }
serai-db = { path = "../../common/db" }
serai-task = { path = "../../common/task", version = "0.1" }
tributary-sdk = { path = "../tributary-sdk" }
- serai-cosign = { path = "../cosign" }
+ serai-cosign-types = { path = "../cosign/types" }
serai-coordinator-substrate = { path = "../substrate" }
messages = { package = "serai-processor-messages", path = "../../processor/messages" }

View File

@@ -1,22 +1,19 @@
- #![expect(clippy::cast_possible_truncation)]

use std::collections::HashMap;

- use scale::Encode;
use borsh::{BorshSerialize, BorshDeserialize};

- use serai_client::{primitives::SeraiAddress, validator_sets::primitives::ExternalValidatorSet};
+ use serai_primitives::{BlockHash, validator_sets::ExternalValidatorSet, address::SeraiAddress};
use messages::sign::{VariantSignId, SignId};

use serai_db::*;

- use serai_cosign::CosignIntent;
+ use serai_cosign_types::CosignIntent;

use crate::transaction::SigningProtocolRound;

/// A topic within the database which the group participates in
- #[derive(Clone, Copy, PartialEq, Eq, Debug, Encode, BorshSerialize, BorshDeserialize)]
+ #[derive(Clone, Copy, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)]
pub enum Topic {
  /// Vote to remove a participant
  RemoveParticipant {
@@ -125,7 +122,7 @@ impl Topic {
      Topic::DkgConfirmation { attempt, round: _ } => Some({
        let id = {
          let mut id = [0; 32];
-         let encoded_set = set.encode();
+         let encoded_set = borsh::to_vec(&set).unwrap();
          id[.. encoded_set.len()].copy_from_slice(&encoded_set);
          VariantSignId::Batch(id)
        };
@@ -235,18 +232,18 @@ create_db!(
    SlashPoints: (set: ExternalValidatorSet, validator: SeraiAddress) -> u32,
    // The cosign intent for a Substrate block
-   CosignIntents: (set: ExternalValidatorSet, substrate_block_hash: [u8; 32]) -> CosignIntent,
+   CosignIntents: (set: ExternalValidatorSet, substrate_block_hash: BlockHash) -> CosignIntent,
    // The latest Substrate block to cosign.
-   LatestSubstrateBlockToCosign: (set: ExternalValidatorSet) -> [u8; 32],
+   LatestSubstrateBlockToCosign: (set: ExternalValidatorSet) -> BlockHash,
    // The hash of the block we're actively cosigning.
-   ActivelyCosigning: (set: ExternalValidatorSet) -> [u8; 32],
+   ActivelyCosigning: (set: ExternalValidatorSet) -> BlockHash,
    // If this block has already been cosigned.
-   Cosigned: (set: ExternalValidatorSet, substrate_block_hash: [u8; 32]) -> (),
+   Cosigned: (set: ExternalValidatorSet, substrate_block_hash: BlockHash) -> (),
    // The plans to recognize upon a `Transaction::SubstrateBlock` being included on-chain.
    SubstrateBlockPlans: (
      set: ExternalValidatorSet,
-     substrate_block_hash: [u8; 32]
+     substrate_block_hash: BlockHash
    ) -> Vec<[u8; 32]>,
    // The weight accumulated for a topic.
@@ -294,26 +291,26 @@ impl TributaryDb {
  pub(crate) fn latest_substrate_block_to_cosign(
    getter: &impl Get,
    set: ExternalValidatorSet,
- ) -> Option<[u8; 32]> {
+ ) -> Option<BlockHash> {
    LatestSubstrateBlockToCosign::get(getter, set)
  }

  pub(crate) fn set_latest_substrate_block_to_cosign(
    txn: &mut impl DbTxn,
    set: ExternalValidatorSet,
-   substrate_block_hash: [u8; 32],
+   substrate_block_hash: BlockHash,
  ) {
    LatestSubstrateBlockToCosign::set(txn, set, &substrate_block_hash);
  }

  pub(crate) fn actively_cosigning(
    txn: &mut impl DbTxn,
    set: ExternalValidatorSet,
- ) -> Option<[u8; 32]> {
+ ) -> Option<BlockHash> {
    ActivelyCosigning::get(txn, set)
  }

  pub(crate) fn start_cosigning(
    txn: &mut impl DbTxn,
    set: ExternalValidatorSet,
-   substrate_block_hash: [u8; 32],
+   substrate_block_hash: BlockHash,
    substrate_block_number: u64,
  ) {
    assert!(
@@ -338,14 +335,14 @@ impl TributaryDb {
  pub(crate) fn mark_cosigned(
    txn: &mut impl DbTxn,
    set: ExternalValidatorSet,
-   substrate_block_hash: [u8; 32],
+   substrate_block_hash: BlockHash,
  ) {
    Cosigned::set(txn, set, substrate_block_hash, &());
  }

  pub(crate) fn cosigned(
    txn: &mut impl DbTxn,
    set: ExternalValidatorSet,
-   substrate_block_hash: [u8; 32],
+   substrate_block_hash: BlockHash,
  ) -> bool {
    Cosigned::get(txn, set, substrate_block_hash).is_some()
  }

View File

@@ -8,9 +8,10 @@ use std::collections::HashMap;
use ciphersuite::group::GroupEncoding;
use dkg::Participant;

- use serai_client::{
-   primitives::SeraiAddress,
-   validator_sets::primitives::{ExternalValidatorSet, Slash},
+ use serai_primitives::{
+   BlockHash,
+   validator_sets::{ExternalValidatorSet, Slash},
+   address::SeraiAddress,
};

use serai_db::*;
@@ -25,7 +26,7 @@ use tributary_sdk::{
  Transaction as TributaryTransaction, Block, TributaryReader, P2p,
};

- use serai_cosign::CosignIntent;
+ use serai_cosign_types::CosignIntent;
use serai_coordinator_substrate::NewSetInformation;

use messages::sign::{VariantSignId, SignId};
@@ -79,7 +80,7 @@ impl CosignIntents {
  fn take(
    txn: &mut impl DbTxn,
    set: ExternalValidatorSet,
-   substrate_block_hash: [u8; 32],
+   substrate_block_hash: BlockHash,
  ) -> Option<CosignIntent> {
    db::CosignIntents::take(txn, set, substrate_block_hash)
  }
@@ -113,7 +114,7 @@ impl SubstrateBlockPlans {
  pub fn set(
    txn: &mut impl DbTxn,
    set: ExternalValidatorSet,
-   substrate_block_hash: [u8; 32],
+   substrate_block_hash: BlockHash,
    plans: &Vec<[u8; 32]>,
  ) {
    db::SubstrateBlockPlans::set(txn, set, substrate_block_hash, plans);
@@ -121,7 +122,7 @@ impl SubstrateBlockPlans {
  fn take(
    txn: &mut impl DbTxn,
    set: ExternalValidatorSet,
-   substrate_block_hash: [u8; 32],
+   substrate_block_hash: BlockHash,
  ) -> Option<Vec<[u8; 32]>> {
    db::SubstrateBlockPlans::take(txn, set, substrate_block_hash)
  }
@@ -574,14 +575,9 @@ impl<TD: Db, TDT: DbTxn, P: P2p> ScanBlock<'_, TD, TDT, P> {
    };
    let msgs = (
      decode_signed_message::<TendermintNetwork<TD, Transaction, P>>(&data.0).unwrap(),
-     if data.1.is_some() {
-       Some(
-         decode_signed_message::<TendermintNetwork<TD, Transaction, P>>(&data.1.unwrap())
-           .unwrap(),
-       )
-     } else {
-       None
-     },
+     data.1.as_ref().map(|data| {
+       decode_signed_message::<TendermintNetwork<TD, Transaction, P>>(data).unwrap()
+     }),
    );

    // Since anything with evidence is fundamentally faulty behavior, not just temporal
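The last hunk is a pure refactor: the `is_some()`/`unwrap()` chain becomes `Option::as_ref().map(...)`, which borrows the payload rather than consuming `data.1`. In miniature:

fn main() {
  let data: (Vec<u8>, Option<Vec<u8>>) = (vec![0], Some(vec![1, 2]));
  // Before: if data.1.is_some() { Some(decode(&data.1.unwrap())) } else { None }
  // After: borrow the contents and map over them, leaving `data` intact
  let lengths = (data.0.len(), data.1.as_ref().map(|data| data.len()));
  assert_eq!(lengths, (1, Some(2)));
}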

View File

@@ -12,10 +12,9 @@ use ciphersuite::{
use dalek_ff_group::Ristretto;
use schnorr::SchnorrSignature;

- use scale::Encode;
use borsh::{BorshSerialize, BorshDeserialize};

- use serai_client::{primitives::SeraiAddress, validator_sets::primitives::MAX_KEY_SHARES_PER_SET};
+ use serai_primitives::{BlockHash, validator_sets::KeyShares, address::SeraiAddress};

use messages::sign::VariantSignId;
@@ -29,7 +28,7 @@ use tributary_sdk::{
use crate::db::Topic;

/// The round this data is for, within a signing protocol.
- #[derive(Clone, Copy, PartialEq, Eq, Debug, Encode, BorshSerialize, BorshDeserialize)]
+ #[derive(Clone, Copy, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)]
pub enum SigningProtocolRound {
  /// A preprocess.
  Preprocess,
@@ -138,7 +137,7 @@ pub enum Transaction {
  /// be the one selected to be cosigned.
  Cosign {
    /// The hash of the Substrate block to cosign
-   substrate_block_hash: [u8; 32],
+   substrate_block_hash: BlockHash,
  },

  /// Note an intended-to-be-cosigned Substrate block as cosigned
@@ -176,7 +175,7 @@ pub enum Transaction {
  /// cosigning the block in question, it'd be safe to provide this and move on to the next cosign.
  Cosigned {
    /// The hash of the Substrate block which was cosigned
-   substrate_block_hash: [u8; 32],
+   substrate_block_hash: BlockHash,
  },

  /// Acknowledge a Substrate block
@@ -187,7 +186,7 @@ pub enum Transaction {
  /// resulting from its handling.
  SubstrateBlock {
    /// The hash of the Substrate block
-   hash: [u8; 32],
+   hash: BlockHash,
  },

  /// Acknowledge a Batch
@@ -242,19 +241,20 @@ impl TransactionTrait for Transaction {
  fn kind(&self) -> TransactionKind {
    match self {
      Transaction::RemoveParticipant { participant, signed } => TransactionKind::Signed(
-       (b"RemoveParticipant", participant).encode(),
+       borsh::to_vec(&(b"RemoveParticipant".as_slice(), participant)).unwrap(),
        signed.to_tributary_signed(0),
      ),

-     Transaction::DkgParticipation { signed, .. } => {
-       TransactionKind::Signed(b"DkgParticipation".encode(), signed.to_tributary_signed(0))
-     }
+     Transaction::DkgParticipation { signed, .. } => TransactionKind::Signed(
+       borsh::to_vec(b"DkgParticipation".as_slice()).unwrap(),
+       signed.to_tributary_signed(0),
+     ),
      Transaction::DkgConfirmationPreprocess { attempt, signed, .. } => TransactionKind::Signed(
-       (b"DkgConfirmation", attempt).encode(),
+       borsh::to_vec(&(b"DkgConfirmation".as_slice(), attempt)).unwrap(),
        signed.to_tributary_signed(0),
      ),
      Transaction::DkgConfirmationShare { attempt, signed, .. } => TransactionKind::Signed(
-       (b"DkgConfirmation", attempt).encode(),
+       borsh::to_vec(&(b"DkgConfirmation".as_slice(), attempt)).unwrap(),
        signed.to_tributary_signed(1),
      ),
@@ -264,13 +264,14 @@ impl TransactionTrait for Transaction {
      Transaction::Batch { .. } => TransactionKind::Provided("Batch"),

      Transaction::Sign { id, attempt, round, signed, .. } => TransactionKind::Signed(
-       (b"Sign", id, attempt).encode(),
+       borsh::to_vec(&(b"Sign".as_slice(), id, attempt)).unwrap(),
        signed.to_tributary_signed(round.nonce()),
      ),

-     Transaction::SlashReport { signed, .. } => {
-       TransactionKind::Signed(b"SlashReport".encode(), signed.to_tributary_signed(0))
-     }
+     Transaction::SlashReport { signed, .. } => TransactionKind::Signed(
+       borsh::to_vec(b"SlashReport".as_slice()).unwrap(),
+       signed.to_tributary_signed(0),
+     ),
    }
  }
@@ -302,14 +303,14 @@ impl TransactionTrait for Transaction {
      Transaction::Batch { .. } => {}
      Transaction::Sign { data, .. } => {
-       if data.len() > usize::from(MAX_KEY_SHARES_PER_SET) {
+       if data.len() > usize::from(KeyShares::MAX_PER_SET) {
          Err(TransactionError::InvalidContent)?
        }
        // TODO: MAX_SIGN_LEN
      }
      Transaction::SlashReport { slash_points, .. } => {
-       if slash_points.len() > usize::from(MAX_KEY_SHARES_PER_SET) {
+       if slash_points.len() > usize::from(KeyShares::MAX_PER_SET) {
          Err(TransactionError::InvalidContent)?
        }
      }
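Each `TransactionKind::Signed` arm derives its signing context by borsh-encoding a label alongside the distinguishing fields, keeping signatures domain-separated per transaction type. A toy demonstration of why distinct labels matter (the labels here mirror the ones above, but the values are illustrative):

fn main() {
  // Each context is a (label, fields) tuple; distinct labels yield distinct
  // byte strings, so a signature bound to one context can't be replayed in another
  let dkg = borsh::to_vec(&(b"DkgConfirmation".as_slice(), 0u32)).unwrap();
  let sign = borsh::to_vec(&(b"Sign".as_slice(), 0u32)).unwrap();
  assert_ne!(dkg, sign);
}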

View File

@@ -7,10 +7,8 @@ db-urls = ["https://github.com/rustsec/advisory-db"]
yanked = "deny"
ignore = [
- "RUSTSEC-2022-0061", # https://github.com/serai-dex/serai/227
- "RUSTSEC-2024-0370", # proc-macro-error is unmaintained
+ "RUSTSEC-2024-0370", # `proc-macro-error` is unmaintained, in-tree due to Substrate/`litep2p`
  "RUSTSEC-2024-0436", # paste is unmaintained
- "RUSTSEC-2025-0057", # fxhash is unmaintained, fixed with bytecodealliance/wasmtime/pull/11634
]

[licenses]
@@ -29,7 +27,6 @@ allow = [
  "ISC",
  "Zlib",
  "Unicode-3.0",
- # "OpenSSL", # Commented as it's not currently in-use within the Serai tree
  "CDLA-Permissive-2.0",

  # Non-invasive copyleft
@@ -73,6 +70,7 @@ exceptions = [
  { allow = ["AGPL-3.0-only"], name = "serai-monero-processor" },
  { allow = ["AGPL-3.0-only"], name = "tributary-sdk" },
+ { allow = ["AGPL-3.0-only"], name = "serai-cosign-types" },
  { allow = ["AGPL-3.0-only"], name = "serai-cosign" },
  { allow = ["AGPL-3.0-only"], name = "serai-coordinator-substrate" },
  { allow = ["AGPL-3.0-only"], name = "serai-coordinator-tributary" },
@@ -80,8 +78,9 @@ exceptions = [
  { allow = ["AGPL-3.0-only"], name = "serai-coordinator-libp2p-p2p" },
  { allow = ["AGPL-3.0-only"], name = "serai-coordinator" },

- { allow = ["AGPL-3.0-only"], name = "pallet-session" },
+ { allow = ["AGPL-3.0-only"], name = "substrate-median" },
+ { allow = ["AGPL-3.0-only"], name = "serai-core-pallet" },
  { allow = ["AGPL-3.0-only"], name = "serai-coins-pallet" },
  { allow = ["AGPL-3.0-only"], name = "serai-dex-pallet" },
@@ -107,6 +106,7 @@ exceptions = [
  { allow = ["AGPL-3.0-only"], name = "serai-message-queue-tests" },
  { allow = ["AGPL-3.0-only"], name = "serai-processor-tests" },
  { allow = ["AGPL-3.0-only"], name = "serai-coordinator-tests" },
+ { allow = ["AGPL-3.0-only"], name = "serai-substrate-tests" },
  { allow = ["AGPL-3.0-only"], name = "serai-full-stack-tests" },
  { allow = ["AGPL-3.0-only"], name = "serai-reproducible-runtime-tests" },
]
@@ -124,12 +124,22 @@ multiple-versions = "warn"
wildcards = "warn"
highlight = "all"
deny = [
+ # Contains a non-reproducible binary blob
+ # https://github.com/serde-rs/serde/pull/2514
+ # https://github.com/serde-rs/serde/issues/2575
  { name = "serde_derive", version = ">=1.0.172, <1.0.185" },
+ # Introduced an insecure implementation of `borsh` removed with `0.15.1`
+ # https://github.com/rust-lang/hashbrown/issues/576
  { name = "hashbrown", version = "=0.15.0" },
  # Legacy which _no one_ should use anymore
  { name = "is-terminal", version = "*" },
  # Stop introduction into the tree without realizing it
  { name = "once_cell_polyfill", version = "*" },
+ # Conflicts with our usage of mimalloc
+ # https://github.com/serai-dex/serai/issues/690
+ { name = "tikv-jemalloc-sys", version = "*" },
]

[sources]
@@ -140,8 +150,5 @@ allow-git = [
  "https://github.com/rust-lang-nursery/lazy-static.rs",
  "https://github.com/kayabaNerve/elliptic-curves",
  "https://github.com/monero-oxide/monero-oxide",
- "https://github.com/kayabaNerve/monero-oxide",
- "https://github.com/rust-bitcoin/rust-bip39",
- "https://github.com/rust-rocksdb/rust-rocksdb",
  "https://github.com/serai-dex/patch-polkadot-sdk",
]
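In practice, these policies are enforced by running `cargo deny check` against the workspace: advisories listed in `ignore` are suppressed, crates matching `deny` fail the check if they enter the dependency graph, and git dependencies outside `allow-git` are rejected.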

View File

@@ -1 +1 @@
- 3.3.4
+ 3.3.10

View File

@@ -1,4 +1,4 @@
source 'https://rubygems.org'
- gem "jekyll", "~> 4.3.3"
+ gem "jekyll", "~> 4.4"
- gem "just-the-docs", "0.8.2"
+ gem "just-the-docs", "0.10.1"

View File

@@ -1,34 +1,39 @@
GEM
  remote: https://rubygems.org/
  specs:
-   addressable (2.8.7)
-     public_suffix (>= 2.0.2, < 7.0)
-   bigdecimal (3.1.8)
+   addressable (2.8.8)
+     public_suffix (>= 2.0.2, < 8.0)
+   base64 (0.3.0)
+   bigdecimal (3.3.1)
    colorator (1.1.0)
-   concurrent-ruby (1.3.4)
+   concurrent-ruby (1.3.5)
+   csv (3.3.5)
    em-websocket (0.5.3)
      eventmachine (>= 0.12.9)
      http_parser.rb (~> 0)
    eventmachine (1.2.7)
-   ffi (1.17.0-x86_64-linux-gnu)
+   ffi (1.17.2-x86_64-linux-gnu)
    forwardable-extended (2.6.0)
-   google-protobuf (4.28.2-x86_64-linux)
+   google-protobuf (4.33.1-x86_64-linux-gnu)
      bigdecimal
      rake (>= 13)
    http_parser.rb (0.8.0)
-   i18n (1.14.6)
+   i18n (1.14.7)
      concurrent-ruby (~> 1.0)
-   jekyll (4.3.4)
+   jekyll (4.4.1)
      addressable (~> 2.4)
+     base64 (~> 0.2)
      colorator (~> 1.0)
+     csv (~> 3.0)
      em-websocket (~> 0.5)
      i18n (~> 1.0)
      jekyll-sass-converter (>= 2.0, < 4.0)
      jekyll-watch (~> 2.0)
+     json (~> 2.6)
      kramdown (~> 2.3, >= 2.3.1)
      kramdown-parser-gfm (~> 1.0)
      liquid (~> 4.0)
-     mercenary (>= 0.3.6, < 0.5)
+     mercenary (~> 0.3, >= 0.3.6)
      pathutil (~> 0.9)
      rouge (>= 3.0, < 5.0)
      safe_yaml (~> 1.0)
@@ -36,19 +41,20 @@ GEM
      webrick (~> 1.7)
    jekyll-include-cache (0.2.1)
      jekyll (>= 3.7, < 5.0)
-   jekyll-sass-converter (3.0.0)
-     sass-embedded (~> 1.54)
+   jekyll-sass-converter (3.1.0)
+     sass-embedded (~> 1.75)
    jekyll-seo-tag (2.8.0)
      jekyll (>= 3.8, < 5.0)
    jekyll-watch (2.2.1)
      listen (~> 3.0)
-   just-the-docs (0.8.2)
+   json (2.16.0)
+   just-the-docs (0.10.1)
      jekyll (>= 3.8.5)
      jekyll-include-cache
      jekyll-seo-tag (>= 2.0)
      rake (>= 12.3.1)
-   kramdown (2.4.0)
-     rexml
+   kramdown (2.5.1)
+     rexml (>= 3.3.9)
    kramdown-parser-gfm (1.1.0)
      kramdown (~> 2.0)
    liquid (4.0.4)
@@ -58,27 +64,27 @@ GEM
    mercenary (0.4.0)
    pathutil (0.16.2)
      forwardable-extended (~> 2.6)
-   public_suffix (6.0.1)
+   public_suffix (7.0.0)
-   rake (13.2.1)
+   rake (13.3.1)
    rb-fsevent (0.11.2)
    rb-inotify (0.11.1)
      ffi (~> 1.0)
-   rexml (3.3.7)
+   rexml (3.4.4)
-   rouge (4.4.0)
+   rouge (4.6.1)
    safe_yaml (1.0.5)
-   sass-embedded (1.79.3-x86_64-linux-gnu)
-     google-protobuf (~> 4.27)
+   sass-embedded (1.94.2-x86_64-linux-gnu)
+     google-protobuf (~> 4.31)
    terminal-table (3.0.2)
      unicode-display_width (>= 1.1.1, < 3)
    unicode-display_width (2.6.0)
-   webrick (1.8.2)
+   webrick (1.9.2)

PLATFORMS
  x86_64-linux

DEPENDENCIES
- jekyll (~> 4.3.3)
+ jekyll (~> 4.4)
- just-the-docs (= 0.8.2)
+ just-the-docs (= 0.10.1)

BUNDLED WITH
- 2.5.11
+ 2.5.22

View File

@@ -46,7 +46,7 @@ serai-db = { path = "../common/db", optional = true }
serai-env = { path = "../common/env" }
- serai-primitives = { path = "../substrate/primitives", features = ["borsh"] }
+ serai-primitives = { path = "../substrate/primitives", default-features = false, features = ["std"] }

[features]
parity-db = ["serai-db/parity-db"]

View File

@@ -7,7 +7,7 @@ use dalek_ff_group::Ristretto;
pub(crate) use ciphersuite::{group::GroupEncoding, WrappedGroup, GroupCanonicalEncoding};
pub(crate) use schnorr_signatures::SchnorrSignature;

- pub(crate) use serai_primitives::ExternalNetworkId;
+ pub(crate) use serai_primitives::network_id::ExternalNetworkId;

pub(crate) use tokio::{
  io::{AsyncReadExt, AsyncWriteExt},
@@ -198,7 +198,7 @@ async fn main() {
      KEYS.write().unwrap().insert(service, key);
      let mut queues = QUEUES.write().unwrap();
      if service == Service::Coordinator {
-       for network in serai_primitives::EXTERNAL_NETWORKS {
+       for network in ExternalNetworkId::all() {
          queues.insert(
            (service, Service::Processor(network)),
            RwLock::new(Queue(db.clone(), service, Service::Processor(network))),
@@ -213,7 +213,7 @@ async fn main() {
  };

  // Make queues for each ExternalNetworkId
- for network in serai_primitives::EXTERNAL_NETWORKS {
+ for network in ExternalNetworkId::all() {
    // Use a match so we error if the list of NetworkIds changes
    let Some(key) = read_key(match network {
      ExternalNetworkId::Bitcoin => "BITCOIN_KEY",
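`ExternalNetworkId::all()` here replaces iterating the old `serai_primitives::EXTERNAL_NETWORKS` constant. A sketch of the pattern with a stand-in enum (not the actual `serai-primitives` definition; only `BITCOIN_KEY` appears in the source, the other variable names are assumed):

#[derive(Clone, Copy)]
enum ExternalNetworkId { Bitcoin, Ethereum, Monero }

impl ExternalNetworkId {
  fn all() -> impl Iterator<Item = Self> {
    [Self::Bitcoin, Self::Ethereum, Self::Monero].into_iter()
  }
}

fn main() {
  for network in ExternalNetworkId::all() {
    // The exhaustive match is kept deliberately: adding a variant to the enum
    // breaks this at compile time, as the comment in the diff notes
    let key_env = match network {
      ExternalNetworkId::Bitcoin => "BITCOIN_KEY",
      ExternalNetworkId::Ethereum => "ETHEREUM_KEY",
      ExternalNetworkId::Monero => "MONERO_KEY",
    };
    println!("{key_env}");
  }
}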

View File

@@ -4,7 +4,7 @@ use ciphersuite::{group::GroupEncoding, FromUniformBytes, WrappedGroup, WithPref
use borsh::{BorshSerialize, BorshDeserialize};

- use serai_primitives::ExternalNetworkId;
+ use serai_primitives::network_id::ExternalNetworkId;

#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug, BorshSerialize, BorshDeserialize)]
pub enum Service {

View File

@@ -6,7 +6,7 @@ license = "MIT"
repository = "https://github.com/serai-dex/serai/tree/develop/networks/bitcoin"
authors = ["Luke Parker <lukeparker5132@gmail.com>", "Vrx <vrx00@proton.me>"]
edition = "2021"
- rust-version = "1.85"
+ rust-version = "1.89"

[package.metadata.docs.rs]
all-features = true

View File

@@ -155,6 +155,7 @@ impl Rpc {
        Err(RpcError::RequestError(Error { code, message }))
      }
      // `invalidateblock` yields this edge case
+     // TODO: https://github.com/core-json/core-json/issues/18
      RpcResponse { result: None, error: None } => {
        if core::any::TypeId::of::<Response>() == core::any::TypeId::of::<()>() {
          Ok(Default::default())
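The edge case documented here is a JSON-RPC response where both `result` and `error` are absent; the code treats that as success only when the caller expected `()`. The `TypeId` comparison, isolated into a sketch:

use core::any::TypeId;

// Treat an empty JSON-RPC response as `Ok` only when the expected type is `()`
fn empty_response_ok<Response: Default + 'static>() -> Option<Response> {
  if TypeId::of::<Response>() == TypeId::of::<()>() {
    Some(Response::default())
  } else {
    None
  }
}

fn main() {
  assert!(empty_response_ok::<()>().is_some());
  assert!(empty_response_ok::<u64>().is_none());
}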

View File

@@ -1,50 +0,0 @@
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
# This GPG-signed message exists to confirm the SHA256 sums of Monero binaries.
#
# Please verify the signature against the key for binaryFate in the
# source code repository (/utils/gpg_keys).
#
#
## CLI
4e1481835824b9233f204553d4a19645274824f3f6185d8a4b50198470752f54 monero-android-armv7-v0.18.4.3.tar.bz2
1aebd24aaaec3d1e87a64163f2e30ab2cd45f3902a7a859413f6870944775c21 monero-android-armv8-v0.18.4.3.tar.bz2
ff7b9c5cf2cb3d602c3dff1902ac0bc3394768cefc260b6003a9ad4bcfb7c6a4 monero-freebsd-x64-v0.18.4.3.tar.bz2
3ac83049bc565fb5238501f0fa629cdd473bbe94d5fb815088af8e6ff1d761cd monero-linux-armv7-v0.18.4.3.tar.bz2
b1cc5f135de3ba8512d56deb4b536b38c41addde922b2a53bf443aeaf2a5a800 monero-linux-armv8-v0.18.4.3.tar.bz2
95baaa6e8957b92caeaed7fb19b5c2659373df8dd5f4de2601ed3dae7b17ce2f monero-linux-riscv64-v0.18.4.3.tar.bz2
3a7b36ae4da831a4e9913e0a891728f4c43cd320f9b136cdb6686b1d0a33fafa monero-linux-x64-v0.18.4.3.tar.bz2
e0b51ca71934c33cb83cfa8535ffffebf431a2fc9efe3acf2baad96fb6ce21ec monero-linux-x86-v0.18.4.3.tar.bz2
bab9a6d3c2ca519386cff5ff0b5601642a495ed1a209736acaf354468cba1145 monero-mac-armv8-v0.18.4.3.tar.bz2
a8d8273b14f31569f5b7aa3063fbd322e3caec3d63f9f51e287dfc539c7f7d61 monero-mac-x64-v0.18.4.3.tar.bz2
bd9f615657c35d2d7dd9a5168ad54f1547dbf9a335dee7f12fab115f6f394e36 monero-win-x64-v0.18.4.3.zip
e642ed7bbfa34c30b185387fa553aa9c3ea608db1f3fc0e9332afa9b522c9c1a monero-win-x86-v0.18.4.3.zip
6ba5e082c8fa25216aba7aea8198f3e23d4b138df15c512457081e1eb3d03ff6 monero-source-v0.18.4.3.tar.bz2
#
## GUI
7b9255c696a462a00a810d9c8f94e60400a9e7d6438e8d6a8b693e9c13dca9ab monero-gui-install-win-x64-v0.18.4.3.exe
0bd84de0a7c18b2a3ea8e8eff2194ae000cf1060045badfd4ab48674bc1b9325 monero-gui-linux-x64-v0.18.4.3.tar.bz2
68ea30db32efb4a0671ec723297b6629d932fa188edf76edb38a37adaa3528e6 monero-gui-mac-armv8-v0.18.4.3.dmg
27243b01f030fdae68c59cae1daf21f530bbadeaf10579d2908db9a834191cee monero-gui-mac-x64-v0.18.4.3.dmg
dc9531cb4319b37b2c2dea4126e44a0fe6e7b6f34d278ccf5dd9ba693e3031e0 monero-gui-win-x64-v0.18.4.3.zip
0d44687644db9b1824f324416e98f4a46b3bb0a5ed09af54b2835b6facaa0cdd monero-gui-source-v0.18.4.3.tar.bz2
#
#
# ~binaryFate
-----BEGIN PGP SIGNATURE-----
iQIzBAEBCAAdFiEEgaxZH+nEtlxYBq/D8K9NRioL35IFAmjntxwACgkQ8K9NRioL
35J57g//dUOY1KAoLNaV7XLJyGbNk1lT6c2+A8h1wkK6iNQhXsmnc6rcigsHXrG0
LQyVUuZJ6ELhNb6BnH5V0zbcB72t8XjkSEqlYhStUfMnaUvj1VdXtL/OnSs3fEvt
Zwz6QBTxIKDDYEYyXrvCK96cYaYlIOgK3IVn/zoHdrHRUqTXqRJkFoTHum5l783y
vr9BwMFFWYePUrilphjIiLyJDl+eB5al8PaJkqK2whxBUHoA2jF1edJOSq2mZajI
+L2fBYClePS8oqwoKGquqCh2RVcmdtXtQTVzRIoNx14qFzP8ymqa+6z1Ygkri7bV
qMCJk7KQ8ND7uU9NShpaCIqrZpr5GZ4Al6SRkcpK/7mipQcy2QpKJR3iOpcfiTX1
YmYGVmLB3zmHu2kiS0kogZv6Ob7+tVFzOQ8NZX4FVnpB0N0phqMfNFOfHzdQZrsZ
qg29HNc9sHlUmsOVmE5w+7Oq+s79yvQB3034XXi/9wQu+f8fKRhqZboe0fe77FLf
QXoAYrZZ7LnGz0Z75Q9O4RB7uxM0Ug5imvyEFus4iuBVyBWjgcfyLnbkKJtbXmfn
BZBbTProhPJfVa/VffBxW9HZB27W7O14oGWVpUkGWnVMZfVY/78XTUHwxaScQsPO
SGawjobQsB3pTMNr/kra1XTjkti70si8Fcs5ueYWGB3yfc6r3hU=
=5HRY
-----END PGP SIGNATURE-----

View File

@@ -0,0 +1,50 @@
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
# This GPG-signed message exists to confirm the SHA256 sums of Monero binaries.
#
# Please verify the signature against the key for binaryFate in the
# source code repository (/utils/gpg_keys).
#
#
## CLI
7c2ad18ca3a1ad5bc603630ca935a753537a38a803e98d645edd6a3b94a5f036 monero-android-armv7-v0.18.4.4.tar.bz2
eb81b71f029884ab5fec76597be583982c95fd7dc3fc5f5083a422669cee311e monero-android-armv8-v0.18.4.4.tar.bz2
bc539178df23d1ae8b69569d9c328b5438ae585c0aacbebe12d8e7d387a745b0 monero-freebsd-x64-v0.18.4.4.tar.bz2
2040dc22748ef39ed8a755324d2515261b65315c67b91f449fa1617c5978910b monero-linux-armv7-v0.18.4.4.tar.bz2
b9daede195a24bdd05bba68cb5cb21e42c2e18b82d4d134850408078a44231c5 monero-linux-armv8-v0.18.4.4.tar.bz2
c939ea6e8002798f24a56ac03cbfc4ff586f70d7d9c3321b7794b3bcd1fa4c45 monero-linux-riscv64-v0.18.4.4.tar.bz2
7fe45ee9aade429ccdcfcad93b905ba45da5d3b46d2dc8c6d5afc48bd9e7f108 monero-linux-x64-v0.18.4.4.tar.bz2
8c174b756e104534f3d3a69fe68af66d6dc4d66afa97dfe31735f8d069d20570 monero-linux-x86-v0.18.4.4.tar.bz2
645e9bbae0275f555b2d72a9aa30d5f382df787ca9528d531521750ce2da9768 monero-mac-armv8-v0.18.4.4.tar.bz2
af3d98f09da94632db3e2f53c62cc612e70bf94aa5942d2a5200b4393cd9c842 monero-mac-x64-v0.18.4.4.tar.bz2
7eb3b87a105b3711361dd2b3e492ad14219d21ed8fd3dd726573a6cbd96e83a6 monero-win-x64-v0.18.4.4.zip
a148a2bd2b14183fb36e2cf917fce6f33fb687564db2ed53193b8432097ab398 monero-win-x86-v0.18.4.4.zip
84570eee26238d8f686605b5e31d59569488a3406f32e7045852de91f35508a2 monero-source-v0.18.4.4.tar.bz2
#
## GUI
4c81c8e97bd542daa453776d888557db1ceb2a718d43f6135ad68b12c8119948 monero-gui-install-win-x64-v0.18.4.4.exe
e45cb3fa9d972d67628cfed6463fb7604ae1414a11ba449f5e2f901c769ac788 monero-gui-linux-x64-v0.18.4.4.tar.bz2
a6f071719c401df339dba2d43ec6fffe103fda3e1df46f354b2496f34bb61cc4 monero-gui-mac-armv8-v0.18.4.4.dmg
811df70811a25f31289f24ebc0edc8f7648670384698d4c768bac5c2acbf2026 monero-gui-mac-x64-v0.18.4.4.dmg
b96faa56aa77cabed1f31f3fc9496e756a8da8c1124da2b9cb0b3730a8b6fbd9 monero-gui-win-x64-v0.18.4.4.zip
a7f6b91bc9efaa83173a397614626bf7612123e0017a48f66137ac397f7d19f8 monero-gui-source-v0.18.4.4.tar.bz2
#
#
# ~binaryFate
-----BEGIN PGP SIGNATURE-----
iQIzBAEBCAAdFiEEgaxZH+nEtlxYBq/D8K9NRioL35IFAmkbGLgACgkQ8K9NRioL
35LWYRAAnPeUu7TADV9Nly2gBlwu7bMK6l7pcUzs3hHhCMpg/Zb7wF8lx4D/r/hT
3wf3gNVK6tYl5GMPpF7GSKvK35SSzNN+8khRd7vhRByG75LGLnrNlcBsQU2wOzUv
Rmm2R8L8GP0B/+zXO92uJDMZ7Q7x72O+3fVX05217HBwz2kvzE1NpXe+EJPnUukA
Tr5CRnxKhxPbilvIhoEHdwkScMZqHMfsbdrefrB3KpO3xEaUz+gO9wESp7nzr4vp
Du6gJYBPK25Z2heZHCRsGN4WQP4QQv4MC0IFczc9fkVDBjywsJeNRRUbGtxR/BNt
vNJGI/kS+7KV140j6GkqAh/leZcaVJ5LRyCaHAwEQNA2T5okhrM0WZpoOAsZMi5K
bW4lNOXfWSw6/tokEPeuoi49yw0f9z0C8a4VLNOZGWKqmHcsA8WE6oVfmvVk6xWu
BqTU1Z9LJqL17GWRAReSX1ZuNA0Q0Pb/klUwP4X2afJcCVZ2YeBNr4jr21u3dYXY
QiLj0Gv7gg7a/GiMpVglNn5GzCu6mT0D94sbMNK+U5Tbve7aOtijJZ8JR62eO/mR
h+oNEys/xEcP9PQ5p74cNL71hNSfWSOcNi+GLSgXC75vsOGr7i96uaamilsHnsYB
p8PZMHzOf1pi6i/L5oOEuRgaujd9IjyCbxoYh3bbxxjBOhNEMqU=
=CVLA
-----END PGP SIGNATURE-----

View File

@@ -0,0 +1,166 @@
#!/bin/bash
# Raises the `PT_GNU_STACK` program header's memory size to at least 8 MB.
#
# This causes `musl` to use an 8 MB default stack for new threads, resolving the
# primary compatibility issue faced when executing a program on a `musl` system.
#
# See https://wiki.musl-libc.org/functional-differences-from-glibc.html#Thread-stack-size
# for reference. This differs in that, instead of setting the size at link time, it
# patches the binary as an already-linked ELF executable.
set -eo pipefail
ELF="$1"
if [ ! -f "$ELF" ]; then
echo "\`increase_default_stack_size.sh\` [ELF binary]"
echo ""
echo "Sets the \`PT_GNU_STACK\` program header to its existing value or 8 MB,"
echo "whichever is greater."
exit 1
fi
function hex {
hexdump -e '1 1 "%.2x"' -v
}
function read_bytes {
dd status=none bs=1 skip=$1 count=$2 if="$ELF" | hex
}
function write_bytes {
POS=$1
BYTES=$2
while [ ! $BYTES = "" ]; do
printf "\x$(printf $BYTES | head -c2)" | dd status=none conv=notrunc bs=1 seek=$POS of="$ELF"
# Start with the third byte, as in, after the first two bytes
BYTES=$(printf $BYTES | tail -c+3)
POS=$(($POS + 1))
done
}
# Magic
MAGIC=$(read_bytes 0 4)
if [ ! $MAGIC = $(printf "\x7fELF" | hex) ]; then
echo "Not ELF"
exit 2
fi
# 1 if 32-bit, 2 if 64-bit
BITS=$(read_bytes 4 1)
case $BITS in
"01") BITS=32;;
"02") BITS=64;;
*)
echo "Not 32- or 64- bit"
exit 3
;;
esac
# For `value_per_bits a b`, `a` if 32-bit and `b` if 64-bit
function value_per_bits {
RESULT=$(($1))
if [ $BITS = 64 ]; then
RESULT=$(($2))
fi
printf $RESULT
}
# Read an integer by its offset, differing depending on if 32- or 64-bit
function read_integer_by_offset {
OFFSET=$(value_per_bits $1 $2)
printf $(( 0x$(swap_native_endian $(read_bytes $OFFSET $3)) ))
}
# 1 if little-endian, 2 if big-endian
LITTLE_ENDIAN=$(read_bytes 5 1)
case $LITTLE_ENDIAN in
"01") LITTLE_ENDIAN=1;;
"02") LITTLE_ENDIAN=0;;
*)
echo "Not little- or big- endian"
exit 4
;;
esac
# While this script handles hex strings in big-endian order, the file must be read
# and written in its declared endianness. This function swaps between big-endian
# and native, as necessary.
function swap_native_endian {
BYTES="$1"
if [ "$BYTES" = "" ]; then
read BYTES
fi
if [ $LITTLE_ENDIAN -eq 0 ]; then
printf $BYTES
return
fi
while [ ! $BYTES = "" ]; do
printf $(printf $BYTES | tail -c2)
BYTES=$(printf $BYTES | head -c-2)
done
}
ELF_VERSION=$(read_bytes 6 1)
if [ ! $ELF_VERSION = "01" ]; then
echo "Unknown ELF Version ($ELF_VERSION)"
exit 5
fi
ELF_VERSION_2=$(read_bytes $((0x14)) 4)
if [ ! $ELF_VERSION_2 = $(swap_native_endian 00000001) ]; then
echo "Unknown secondary ELF Version ($ELF_VERSION_2)"
exit 6
fi
# Find where the program headers are
PROGRAM_HEADERS_OFFSET=$(read_integer_by_offset 0x1c 0x20 $(value_per_bits 4 8))
PROGRAM_HEADER_SIZE=$(value_per_bits 0x20 0x38)
DECLARED_PROGRAM_HEADER_SIZE=$(read_integer_by_offset 0x2a 0x36 2)
if [ ! $PROGRAM_HEADER_SIZE -eq $DECLARED_PROGRAM_HEADER_SIZE ]; then
echo "Unexpected size of a program header ($DECLARED_PROGRAM_HEADER_SIZE)"
exit 7
fi
function program_header_start {
printf $(($PROGRAM_HEADERS_OFFSET + ($1 * $PROGRAM_HEADER_SIZE)))
}
function read_program_header {
read_bytes $(program_header_start $1) $PROGRAM_HEADER_SIZE
}
# Iterate over each program header
PROGRAM_HEADERS=$(read_integer_by_offset 0x2c 0x38 2)
NEXT_PROGRAM_HEADER=$(( $PROGRAM_HEADERS - 1 ))
FOUND=0
while [ $NEXT_PROGRAM_HEADER -ne -1 ]; do
THIS_PROGRAM_HEADER=$NEXT_PROGRAM_HEADER
NEXT_PROGRAM_HEADER=$(( $NEXT_PROGRAM_HEADER - 1 ))
PROGRAM_HEADER=$(read_program_header $THIS_PROGRAM_HEADER)
HEADER_TYPE=$(printf $PROGRAM_HEADER | head -c8)
# `PT_GNU_STACK`
# https://github.com/torvalds/linux/blob/c2f2b01b74be8b40a2173372bcd770723f87e7b2/include/uapi/linux/elf.h#L41
if [ ! "$(swap_native_endian $HEADER_TYPE)" = "6474e551" ]; then
continue
fi
FOUND=1
MEMSZ_OFFSET=$(( $(program_header_start $THIS_PROGRAM_HEADER) + $(value_per_bits 0x14 0x28) ))
MEMSZ_LEN=$(value_per_bits 4 8)
# `MEMSZ_OFFSET` is passed for both the 32-bit and 64-bit positions, as it's already been derived for the current bitness
MEMSZ=$(read_integer_by_offset $MEMSZ_OFFSET $MEMSZ_OFFSET $MEMSZ_LEN)
DESIRED_STACK_SIZE=$((8 * 1024 * 1024))
# Only run if the inherent value is _smaller_
if [ $MEMSZ -lt $DESIRED_STACK_SIZE ]; then
# `2 *`, as this is its length in hexadecimal
HEX_MEMSZ=$(printf %."$((2 * $MEMSZ_LEN))"x $DESIRED_STACK_SIZE)
write_bytes $MEMSZ_OFFSET $(swap_native_endian $HEX_MEMSZ)
fi
done
if [ $FOUND -eq 0 ]; then
echo "\`PT_GNU_STACK\` program header not found"
exit 8
fi
echo "All instances of \`PT_GNU_STACK\` patched to be at least 8 MB"
exit 0
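The Monero orchestration below invokes this via `RUN ./increase_default_stack_size.sh {monero_binary}`. To confirm the patch applied, inspecting the binary with `readelf -lW <binary>` should show the `GNU_STACK` program header with a `MemSiz` of at least `0x800000` (8 MiB).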

View File

@@ -1,13 +1,12 @@
- # rust:1.91.1-alpine as of November 11th, 2025 (GMT)
- FROM --platform=linux/amd64 rust@sha256:700c0959b23445f69c82676b72caa97ca4359decd075dca55b13339df27dc4d3 AS deterministic
- RUN apk add musl-dev=1.2.5-r10
+ #check=skip=FromPlatformFlagConstDisallowed
+ # We want to explicitly set the platform to ensure a constant host environment
+ # rust:1.91.1-alpine as of December 4th, 2025 (GMT)
+ FROM --platform=linux/amd64 rust@sha256:84f263251b0ada72c1913d82a824d47be15a607f3faf015d8bdae48db544cdf2 AS builder

- # Add the wasm toolchain
+ # Add the WASM toolchain
RUN rustup target add wasm32v1-none

- FROM deterministic

# Add files for build
ADD patches /serai/patches
ADD common /serai/common
@@ -27,8 +26,17 @@ ADD AGPL-3.0 /serai
WORKDIR /serai

- # Build the runtime, copying it to the volume if it exists
- ENV RUSTFLAGS="-Ctarget-feature=-crt-static"
- CMD cargo build --release -p serai-runtime && \
-   mkdir -p /volume && \
-   cp /serai/target/release/wbuild/serai-runtime/serai_runtime.wasm /volume/serai.wasm
+ # Build the runtime
+ RUN cargo build --release -p serai-runtime --no-default-features
+
+ # Copy the artifact to its own image which solely exists to further export it
+ FROM scratch
+ # Copy `busybox`, including the necessary shared libraries, from the builder for a functioning `cp`
+ COPY --from=builder /lib/ld-musl-x86_64.so.1 /lib/libc.musl-x86_64.so.1 /lib/
+ COPY --from=builder /bin/busybox /bin/
+ ENV LD_LIBRARY_PATH=/lib/
+ ENV PATH=/bin
+ # Copy the artifact itself
+ COPY --from=builder /serai/target/release/serai_runtime.wasm /serai.wasm
+ # By default, copy the artifact to `/volume`, presumably a provided volume
+ CMD ["busybox", "cp", "/serai.wasm", "/volume/serai.wasm"]

View File

@@ -16,7 +16,7 @@ pub fn coordinator(
) {
  let db = network.db();
  let longer_reattempts = if network == Network::Dev { "longer-reattempts" } else { "" };
- let setup = mimalloc(Os::Debian).to_string() +
+ let setup = mimalloc(Os::Debian) +
    &build_serai_service(
      "",
      Os::Debian,

View File

@@ -3,7 +3,7 @@ use std::path::Path;
use crate::{Network, Os, mimalloc, os, build_serai_service, write_dockerfile};

pub fn ethereum_relayer(orchestration_path: &Path, network: Network) {
- let setup = mimalloc(Os::Debian).to_string() +
+ let setup = mimalloc(Os::Debian) +
    &build_serai_service(
      "",
      Os::Debian,

View File

@@ -13,7 +13,7 @@ pub fn message_queue(
  ethereum_key: <Ristretto as WrappedGroup>::G,
  monero_key: <Ristretto as WrappedGroup>::G,
) {
- let setup = mimalloc(Os::Alpine).to_string() +
+ let setup = mimalloc(Os::Alpine) +
    &build_serai_service("", Os::Alpine, network.release(), network.db(), "serai-message-queue");

  let env_vars = [

View File

@@ -1,36 +1,85 @@
use crate::Os;

- pub fn mimalloc(os: Os) -> &'static str {
-   const ALPINE_MIMALLOC: &str = r#"
- FROM alpine:latest AS mimalloc-alpine
- RUN apk update && apk upgrade && apk --no-cache add gcc g++ libc-dev make cmake git
- RUN git clone https://github.com/microsoft/mimalloc && \
-   cd mimalloc && \
-   git checkout fbd8b99c2b828428947d70fdc046bb55609be93e && \
-   mkdir -p out/secure && \
-   cd out/secure && \
-   cmake -DMI_SECURE=ON -DMI_GUARDED=on ../.. && \
-   make && \
-   cp ./libmimalloc-secure.so ../../../libmimalloc.so
- "#;
-
-   const DEBIAN_MIMALLOC: &str = r#"
- FROM debian:trixie-slim AS mimalloc-debian
- RUN apt update && apt upgrade -y && apt install -y gcc g++ make cmake git
- RUN git clone https://github.com/microsoft/mimalloc && \
-   cd mimalloc && \
-   git checkout fbd8b99c2b828428947d70fdc046bb55609be93e && \
-   mkdir -p out/secure && \
-   cd out/secure && \
-   cmake -DMI_SECURE=ON -DMI_GUARDED=on ../.. && \
-   make && \
-   cp ./libmimalloc-secure.so ../../../libmimalloc.so
- "#;
-
-   match os {
-     Os::Alpine => ALPINE_MIMALLOC,
-     Os::Debian => DEBIAN_MIMALLOC,
-   }
- }
+ // 2.2.4
+ const MIMALLOC_VERSION: &str = "fbd8b99c2b828428947d70fdc046bb55609be93e";
+ const FLAGS: &str =
+   "-DMI_SECURE=ON -DMI_GUARDED=ON -DMI_BUILD_STATIC=OFF -DMI_BUILD_OBJECT=OFF -DMI_BUILD_TESTS=OFF";
+
+ pub fn mimalloc(os: Os) -> String {
+   let build_script = |env, flags| {
+     format!(
+       r#"
+ #!/bin/sh
+ set -e
+
+ git clone https://github.com/microsoft/mimalloc
+ cd mimalloc
+ git checkout {MIMALLOC_VERSION}
+ # For some reason, `mimalloc` contains binary blobs in the repository, so we remove those now
+ rm -rf .git ./bin
+
+ mkdir -p out/secure
+ cd out/secure
+ # `CMakeLists.txt` requires a C++ compiler but `mimalloc` does not use one by default. We claim
+ # there is a working C++ compiler so CMake doesn't complain, allowing us to not unnecessarily
+ # install one. If it was ever invoked, our choice of `false` would immediately let us know.
+ # https://github.com/microsoft/mimalloc/issues/1179
+ {env} CXX=false cmake -DCMAKE_CXX_COMPILER_WORKS=1 {FLAGS} {flags} ../..
+ make
+ cd ../..
+
+ # Copy the built library to the original directory
+ cd ..
+ cp mimalloc/out/secure/libmimalloc-secure.so ./libmimalloc.so
+
+ # Clean up the source directory
+ rm -rf ./mimalloc
+ "#
+     )
+   };
+
+   let build_commands = |env, flags| {
+     let mut result = String::new();
+     for line in build_script(env, flags)
+       .lines()
+       .map(|line| {
+         assert!(!line.contains('"'));
+         format!(r#"RUN echo "{line}" >> ./mimalloc.sh"#)
+       })
+       .chain(["RUN /bin/sh ./mimalloc.sh", "RUN rm ./mimalloc.sh"].into_iter().map(str::to_string))
+     {
+       result.push_str(&line);
+       result.push('\n');
+     }
+     result
+   };
+
+   let alpine_build = build_commands("CC=$(uname -m)-alpine-linux-musl-gcc", "-DMI_LIBC_MUSL=ON");
+   let debian_build = build_commands("", "");
+
+   let alpine_mimalloc = format!(
+     r#"
+ FROM alpine:latest AS mimalloc-alpine
+ RUN apk update && apk upgrade && apk --no-cache add musl-dev gcc make cmake git
+ {alpine_build}
+ "#
+   );
+
+   let debian_mimalloc = format!(
+     r#"
+ FROM debian:trixie-slim AS mimalloc-debian
+ RUN apt update && apt upgrade -y && apt install -y gcc make cmake git
+ {debian_build}
+ "#
+   );
+
+   match os {
+     Os::Alpine => alpine_mimalloc,
+     Os::Debian => debian_mimalloc,
+   }
+ }
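Note the design choice here: rather than one opaque `RUN git clone ... && make` chain, the build script is emitted line by line into `mimalloc.sh` via individual `RUN echo` instructions, keeping every step visible in the Dockerfile, and the `assert!(!line.contains('"'))` guard is what makes that naive `echo "{line}"` quoting safe. (The `{flags}` interpolation in the `cmake` invocation is inferred here; without it, the per-OS flags such as `-DMI_LIBC_MUSL=ON` would go unused.)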

View File

@@ -29,7 +29,7 @@ RUN tar xzvf bitcoin-${BITCOIN_VERSION}-$(uname -m)-linux-gnu.tar.gz
RUN mv bitcoin-${BITCOIN_VERSION}/bin/bitcoind .
"#;

- let setup = mimalloc(Os::Debian).to_string() + DOWNLOAD_BITCOIN;
+ let setup = mimalloc(Os::Alpine) + DOWNLOAD_BITCOIN;

  let run_bitcoin = format!(
    r#"
@@ -43,7 +43,7 @@ CMD ["/run.sh"]
    network.label()
  );

- let run = os(Os::Debian, "", "bitcoin") + &run_bitcoin;
+ let run = os(Os::Alpine, "", "bitcoin") + &run_bitcoin;
  let res = setup + &run;

  let mut bitcoin_path = orchestration_path.to_path_buf();

View File

@@ -17,7 +17,7 @@ pub fn ethereum(orchestration_path: &Path, network: Network) {
    (reth(network), nimbus(network))
  };

- let download = mimalloc(Os::Alpine).to_string() + &el_download + &cl_download;
+ let download = mimalloc(Os::Alpine) + &el_download + &cl_download;

  let run = format!(
    r#"
@@ -26,7 +26,7 @@ CMD ["/run.sh"]
  "#,
    network.label()
  );

- let run = mimalloc(Os::Debian).to_string() +
+ let run = mimalloc(Os::Debian) +
    &os(Os::Debian, &(el_run_as_root + "\r\n" + &cl_run_as_root), "ethereum") +
    &el_run +
    &cl_run +

View File

@@ -10,7 +10,7 @@ fn monero_internal(
  monero_binary: &str,
  ports: &str,
) {
- const MONERO_VERSION: &str = "0.18.4.3";
+ const MONERO_VERSION: &str = "0.18.4.4";

  let arch = match std::env::consts::ARCH {
    // We probably would run this without issues yet it's not worth needing to provide support for
@@ -21,7 +21,7 @@
  };

  #[rustfmt::skip]
- let download_monero = format!(r#"
+ let mut download_monero = format!(r#"
FROM alpine:latest AS monero
RUN apk --no-cache add wget gnupg
@@ -41,7 +41,17 @@ RUN tar -xvjf monero-linux-{arch}-v{MONERO_VERSION}.tar.bz2 --strip-components=1
    network.label(),
  );

- let setup = mimalloc(os).to_string() + &download_monero;
+ if os == Os::Alpine {
+   // Increase the default stack size, as Monero does heavily use its stack
+   download_monero += &format!(
+     r#"
+ ADD orchestration/increase_default_stack_size.sh .
+ RUN ./increase_default_stack_size.sh {monero_binary}
+ "#
+   );
+ }
+ let setup = mimalloc(os) + &download_monero;

  let run_monero = format!(
    r#"
@@ -69,13 +79,13 @@ CMD ["/run.sh"]
}

pub fn monero(orchestration_path: &Path, network: Network) {
- monero_internal(network, Os::Debian, orchestration_path, "monero", "monerod", "18080 18081")
+ monero_internal(network, Os::Alpine, orchestration_path, "monero", "monerod", "18080 18081")
}

pub fn monero_wallet_rpc(orchestration_path: &Path) {
  monero_internal(
    Network::Dev,
-   Os::Debian,
+   Os::Alpine,
    orchestration_path,
    "monero-wallet-rpc",
    "monero-wallet-rpc",

View File

@@ -17,7 +17,7 @@ pub fn processor(
  substrate_evrf_key: Zeroizing<Vec<u8>>,
  network_evrf_key: Zeroizing<Vec<u8>>,
) {
- let setup = mimalloc(Os::Debian).to_string() +
+ let setup = mimalloc(Os::Debian) +
    &build_serai_service(
      if coin == "ethereum" {
        r#"

View File

@@ -12,10 +12,7 @@ pub fn serai(
  serai_key: &Zeroizing<<Ristretto as WrappedGroup>::F>,
) {
  // Always builds in release for performance reasons
- let setup =
-   mimalloc(Os::Debian).to_string() + &build_serai_service("", Os::Debian, true, "", "serai-node");
- let setup_fast_epoch = mimalloc(Os::Debian).to_string() +
-   &build_serai_service("", Os::Debian, true, "fast-epoch", "serai-node");
+ let setup = mimalloc(Os::Debian) + &build_serai_service("", Os::Debian, true, "", "serai-node");

  let env_vars = [("KEY", hex::encode(serai_key.to_repr()))];
  let mut env_vars_str = String::new();
@@ -40,16 +37,9 @@ CMD {env_vars_str} "/run.sh"
  let run = os(Os::Debian, "", "serai") + &run_serai;

  let res = setup + &run;
- let res_fast_epoch = setup_fast_epoch + &run;

  let mut serai_path = orchestration_path.to_path_buf();
  serai_path.push("serai");
- let mut serai_fast_epoch_path = serai_path.clone();
  serai_path.push("Dockerfile");
- serai_fast_epoch_path.push("Dockerfile.fast-epoch");

  write_dockerfile(serai_path, &res);
- write_dockerfile(serai_fast_epoch_path, &res_fast_epoch);
}

View File

@@ -12,8 +12,7 @@ edition = "2021"
all-features = true
rustdoc-args = ["--cfg", "docsrs"]

- [lints]
- workspace = true
+ [workspace]

[dependencies]
std-shims = { path = "../../common/std-shims", version = "0.1.4", default-features = false, optional = true }

View File

@@ -26,7 +26,7 @@ impl<C: ciphersuite::GroupIo> Ciphersuite for C {
  #[cfg(feature = "alloc")]
  fn read_G<R: io::Read>(reader: &mut R) -> io::Result<Self::G> {
    <C as ciphersuite::GroupIo>::read_G(reader)
  }
}

#[cfg(feature = "ed25519")]

View File

@@ -7,14 +7,12 @@ repository = "https://github.com/serai-dex/serai/tree/develop/crypto/dalek-ff-gr
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = ["curve25519", "ed25519", "ristretto", "dalek", "group"]
edition = "2021"
- rust-version = "1.85"

[package.metadata.docs.rs]
all-features = true
rustdoc-args = ["--cfg", "docsrs"]

- [lints]
- workspace = true
+ [workspace]

[dependencies]
dalek-ff-group = { path = "../../crypto/dalek-ff-group", default-features = false }

View File

@@ -33,12 +33,4 @@ impl FieldElement {
      u256 = u256.wrapping_sub(&MODULUS);
    }
  }

- /// Create a `FieldElement` from the reduction of a 512-bit number.
- ///
- /// The bytes are interpreted in little-endian format.
- #[deprecated]
- pub fn wide_reduce(value: [u8; 64]) -> Self {
-   <FieldElement as prime_field::ff::FromUniformBytes<_>>::from_uniform_bytes(&value)
- }
}

View File

@@ -12,5 +12,7 @@ edition = "2021"
all-features = true
rustdoc-args = ["--cfg", "docsrs"]

+ [workspace]

[dependencies]
darling = { version = "0.21" }

View File

@@ -1,6 +1,6 @@
[package]
name = "directories-next"
- version = "2.0.0"
+ version = "2.0.99"
description = "Patch from directories-next back to directories"
license = "MIT"
repository = "https://github.com/serai-dex/serai/tree/develop/patches/directories-next"
@@ -12,5 +12,7 @@ edition = "2021"
all-features = true
rustdoc-args = ["--cfg", "docsrs"]

+ [workspace]

[dependencies]
- directories = "5"
+ directories = "6"

View File

@@ -0,0 +1,19 @@
[package]
name = "alloy-eip2124"
version = "0.2.99"
description = "Patch to an empty crate"
license = "MIT"
repository = "https://github.com/serai-dex/serai/tree/develop/patches/ethereum/alloy-eip2124"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = []
edition = "2021"
[package.metadata.docs.rs]
all-features = true
rustdoc-args = ["--cfg", "docsrs"]
[workspace]
[features]
std = []
serde = []

Some files were not shown because too many files have changed in this diff.